Tuesday 28 October 2008

Validation: What is validation?

The seemingly simple question "What is Validation?" has different answers depending on your viewpoint and professional background. For example, I have spoken with scientists who use statistical analysis software. Their view of validation is straightforward software testing; put numbers in and check the numbers that come out.

But then what about related terms such as Qualification, Verification and Testing? Why use a term such as validation when you can just say Testing?

Within the regulated Life Sciences sector, validation is more than just software testing. We can show how Validation defines an overall framework incorporating Qualification, Verification and Testing.

Define Validation

First, let's define Validation...or rather let the FDA define it for us:

Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.

There are some key terms here that we should take note of:

documented evidence: We need to write stuff down that shows evidence of validation; it is not enough to go test something and then say it's validated.

specific process: This is what makes validation different from verification or testing. Validation is about the process, not just the tools (software) used to execute the process.

predetermined specifications and quality attributes: We need to define what the process should do, and how, before we execute the process.

Qualification, Verification and Testing

So Validation addresses the entire system: software, underlying hardware (infrastructure), business processes, operating procedures and training, across the whole life cycle through to retirement.

The validation of a system will encompass one or more Qualification phases. These may focus on a life cycle phase (e.g. design phase, installation) or may focus on a specific component (e.g. infrastructure, support services).

A Qualification phase will comprise activities such as planning, specification, design and analysis. At least one of these will be a Verification activity. There are different types of verification activity, depending on the scope of the qualification phase. For example, a design qualification phase will generally have code review and design review as verification activities.

Obviously, one form of verification is Testing. For example, an Operational Qualification phase would include System Testing, Integration Testing, User Acceptance Testing and so on as verification activities.
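
To make this hierarchy concrete, here is a minimal sketch in Python; the class structure and the phase, activity and process names are my own illustrative assumptions, not anything mandated by the FDA or GAMP:

```python
from dataclasses import dataclass, field

@dataclass
class Verification:
    """A verification activity within a qualification phase."""
    name: str              # e.g. "Design Review", "System Testing"
    is_test: bool = False  # Testing is just one form of verification

@dataclass
class Qualification:
    """A qualification phase, focused on a life cycle phase or a component."""
    name: str
    activities: list[str] = field(default_factory=list)
    verifications: list[Verification] = field(default_factory=list)

@dataclass
class Validation:
    """Validation covers the whole process and encompasses qualification phases."""
    process: str
    qualifications: list[Qualification] = field(default_factory=list)

validation = Validation(
    process="Batch record reporting",  # hypothetical example process
    qualifications=[
        Qualification("Design Qualification",
                      activities=["planning", "specification", "design"],
                      verifications=[Verification("Design Review"),
                                     Verification("Code Review")]),
        Qualification("Operational Qualification",
                      verifications=[Verification("System Testing", is_test=True),
                                     Verification("User Acceptance Testing", is_test=True)]),
    ],
)
```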

Here is a diagram that should put everything in context:

Friday 19 September 2008

Standards: Incidents, Problems and CAPA

An analysis of FDA Warning Letters issued over the past few years shows some recurring themes (see previous posts). One theme consistently stands out: the regulation cited more than any other is 21CFR Part 820.100 Corrective and Preventive Action, closely followed by 21CFR 820.198 Complaint Files.

In this blog I want to highlight what CAPA is, and place it in context alongside Incident Management, Problem Management and Complaints.

Incident Management and Problem Management

ITIL has a clear explanation of Incident and Problem Management and the difference between them. It goes like this; consider the following analogy:

Every city has a stretch of road where accidents seem to occur on a regular basis; so called “accident black-spots”. When an accident happens, the police are usually the first on the scene, quickly followed by other emergency vehicles as required: ambulances, fire, tow truck, etc. The first order of business is to attend to the injured. Next is to get the traffic moving again.

This is the essence of Incident Management; it is reactive and looks for an immediate, short-term solution.

Somewhere, people are gathering information and analysing that accident, what may have caused it and how it may relate to other accidents which occurred along that same stretch of road. They analyse, among other things, traffic patterns, the time of day, weather conditions at the time, road signage. From this analysis, they seek to determine the ROOT CAUSE of the accidents and thus find a means of preventing accidents.

This is the essence of Problem Management; it is proactive and looks for a permanent solution to prevent further incidents.

Corrective and Preventive Action

FDA Guidance says the following;

Corrective action is a reactive tool for system improvement to ensure that significant problems do not recur.

and...

Being proactive is an essential tool in quality systems management. Succession planning, training, capturing institutional knowledge, and planning for personnel, policy, and process changes are preventive actions that will help ensure that potential problems and root causes are identified, possible consequences assessed, and appropriate actions considered.

So the focus in these activities is to find root causes and ways to stop problems happening in the future, rather than righting what has happened in the past.

It is important to understand that CAPA is not an Incident Management process - CAPA is all about Problem Management; it is the same as ITIL Problem Management, with both reactive (triggered by incidents/failures) and proactive (triggered by other sources) activities.

The Incident Management process is essentially addressed by the "Complaint Files" regulation for medical devices (and by other regulations focussing on manufacturing incidents and adverse events). Perhaps this is a source of the large volume of citations for violations of these regulations: companies lack an understanding of the interface between Complaints (incidents) and CAPA (problems), which is much easier to understand when viewed from the ITIL framework.
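
As a rough sketch of that interface, consider the following; the record structure and field names are my own invention for illustration, not terms taken from the regulation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Complaint:
    """Incident-side record: reactive, aimed at the immediate event."""
    id: str
    description: str
    capa_id: Optional[str] = None  # link to a CAPA, if one is opened
    no_capa_rationale: str = ""    # documented justification if no CAPA is raised

@dataclass
class CAPA:
    """Problem-side record: root cause plus corrective and preventive action."""
    id: str
    root_cause: str
    corrective_action: str  # reactive: stop this problem recurring
    preventive_action: str  # proactive: stop it occurring elsewhere
    complaint_ids: list = field(default_factory=list)  # triggering incidents, if any

# A complaint either links to a CAPA or records why none was needed.
c = Complaint("C-041", "Device display blank on power-up", capa_id="CAPA-007")
capa = CAPA("CAPA-007",
            root_cause="Connector solder fatigue",
            corrective_action="Rework affected lots",
            preventive_action="Change solder process and update inspection SOP",
            complaint_ids=[c.id])
```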

GAMP Honorable Mention

As a footnote to this, it is worth mentioning that GAMP4 did not really address either incident or problem management. This has been corrected in GAMP5 with the addition of the Operational Appendices O4 Incident Management and O5 Corrective and Preventive Action. Note how GAMP employs language recognisable to IT stakeholders (incidents) and regulatory stakeholders (CAPA), bridging the gap of understanding that may have existed before.

Sunday 7 September 2008

Standards: What is "best practice"??


There are a lot of things out there that claim to be representing "best practice". Here is a brief list of the management systems or approaches that are commonly mentioned:

ISO 12207: aims to be 'the' standard that defines all the tasks required for developing and maintaining software.

ISO 20000: describes the best practices for service management

ISO 27001: specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented ISMS

ISO 9001: set of requirements for a quality management system.

ISO 13485: requirements for a comprehensive management system for the design and manufacture of medical devices.

ISO 15504: Software Process Improvement and Capability dEtermination is a "framework for the assessment of processes".

COBIT: a set of best practices (framework) for information technology (IT) management

ITIL: a set of concepts and techniques for managing information technology (IT) infrastructure, development, and operations.

GAMP: a series of Good Practice Guides on several topics involved in drug manufacturing

CMMI: a process improvement approach that provides organizations with the essential elements of effective processes.

COSO: a common definition of internal controls, standards, and criteria against which companies and organizations can assess their control systems

So what are the differences? Well, as you may deduce, there are not that many real differences underlying these publications. Some focus on service management, others on software development, and others on system or process controls, but in their efforts to broaden their appeal they actually overlap to such an extent that they can often be mapped process for process - the only real difference is the language used to describe the tasks.

Here is a typical example of a mapping between ITIL v2 and COBIT v4 (see the table below). And remembering that ISO 20000 is derived from ITIL, this also gives a mapping from COBIT to ISO 20000.

So whichever system you choose to adopt, you can make a fairly safe bet that you will also be covering many of the requirements from the other models.

ITIL Process | COBIT Process | Control Objective | Control Objective Name

SERVICE LEVEL MANAGEMENT | DS 1 | DS 1.0 | Define and Manage Service Levels
The SLM Process | DS 1 | DS 1.1 | Service Level Agreement Framework
Planning the Process | DS 1 | DS 1.2 | Aspects of Service Level Agreements
Implementing the Process | DS 1 | DS 1.2 | Aspects of Service Level Agreements
The On-going Process | DS 1 | DS 1.5 | Review of Service Level Agreements and Contracts
SLA contents and key targets | DS 1 | DS 1.2 | Aspects of Service Level Agreements
Key Performance Indicators and metrics for SLM efficiency and effectiveness | DS 1 | DS 1.4 | Monitoring and Reporting

FINANCIAL MANAGEMENT FOR IT SERVICES | PO 5 | PO 5.0 | Manage the IT Investment
Budgeting | PO 5 | PO 5.1 | Annual IT Operating Budget
Developing the IT Accounting system | PO 5 | PO 5.1 | Annual IT Operating Budget
Developing the Charging System | DS 6 | DS 6.2 | Costing Procedures
Planning for IT Accounting and Charging | DS 6 | DS 6.1 | Chargeable Items
Implementation | DS 6 | DS 6.0 | Identify and Allocate Costs
Ongoing management and operation | DS 6 | DS 6.3 | User Billing and Chargeback Procedures

CAPACITY MANAGEMENT | DS 2 | DS 2.0 | Manage Third-Party Services
The Capacity Management process | DS 3 | DS 3.0 | Manage Performance and Capacity
Activities in Capacity Management | DS 3 | DS 3.7 | Capacity Management of Resources
Costs, benefits and possible problems | DS 3 | DS 3.7 | Capacity Management of Resources
Planning and implementation | DS 3 | DS 3.0 | Manage Performance and Capacity
Review of the Capacity Management process | DS 3 | DS 3.3 | Monitoring and Reporting
Interfaces with other SM processes | n.a. | n.a. | n.a.

IT SERVICE CONTINUITY MANAGEMENT | DS 4 | DS 4.0 | Ensure Continuous Service
Scope of ITSCM | DS 4 | DS 4.1 | IT Continuity Framework
The Business Continuity Lifecycle | DS 4 | DS 4.1 | IT Continuity Framework
Management Structure | DS 4 | DS 4.1 | IT Continuity Framework
Generating awareness | DS 4 | DS 4.1 | IT Continuity Framework
Interfaces with other SM processes | n.a. | n.a. | n.a.

AVAILABILITY MANAGEMENT | DS 4 | DS 4.0 | Ensure Continuous Service
Basic concepts | DS 4 | DS 4.2 | IT Continuity Plan Strategy and Philosophy
The Availability Management Process | DS 4 | DS 4.0 | Ensure Continuous Service
The Cost of (Un)Availability | PO 9 | PO 9.4 | Assess Risks
Availability Planning | DS 3 | DS 3.2 | Availability Plan
Availability improvement | DS 4 | DS 4.4 | Minimising IT Continuity Requirements
Availability measurement and reporting | DS 3 | DS 3.3 | Monitoring and Reporting
Availability Management tools | DS 3 | DS 3.4 | Modeling Tools
Availability Management methods and techniques | DS 3 | DS 3.0 | Manage Performance and Capacity

THE SERVICE DESK | DS 8 | DS 8.0 | Assist and Advise Customers
Overview | DS 8 | DS 8.1 | Help Desk
Implementing a Service Desk infrastructure | DS 8 | DS 8.1 | Help Desk
Service Desk technologies | n.a. | n.a. | n.a.
Service Desk responsibilities, functions, staffing levels etc | PO 4 | PO 4.4 | Roles and Responsibilities
Service Desk staffing skill set | PO 7 | PO 7.4 | Personnel Training
Setting up a Service Desk environment | PO 8 | PO 8.1 | External Requirements Review
Service Desk education and training | PO 7 | PO 7.4 | Personnel Training
Service Desk processes and procedures | DS 8 | DS 8.0 | Assist and Advise Customers
Incident reporting and review | DS 5 | DS 5.10 | Violation and Security Activity Reports

INCIDENT MANAGEMENT | DS 10 | DS 10.0 | Manage Problems and Incidents
Goal of Incident Management | DS 10 | DS 10.0 | Manage Problems and Incidents
Scope of Incident Management | DS 10 | DS 10.1 | Problem Management System
Basic concepts | DS 10 | DS 10.1 | Problem Management System
Benefits of Incident Management | DS 10 | DS 10.1 | Problem Management System
Planning and implementation | DS 10 | DS 10.1 | Problem Management System
Incident Management activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail
Handling of major Incidents | DS 10 | DS 10.2 | Problem Escalation
Roles of the Incident Management process | DS 10 | DS 10.0 | Manage Problems and Incidents
Key Performance Indicators | DS 10 | DS 10.3 | Problem Tracking and Audit Trail
Tools | DS 10 | DS 10.1 | Problem Management System

PROBLEM MANAGEMENT | DS 10 | DS 10.0 | Manage Problems and Incidents
Goal of Problem Management | DS 10 | DS 10.0 | Manage Problems and Incidents
Scope of Problem Management | DS 10 | DS 10.1 | Problem Management System
Basic concepts | DS 10 | DS 10.1 | Problem Management System
Benefits of Problem Management | DS 10 | DS 10.1 | Problem Management System
Planning and implementation | DS 10 | DS 10.1 | Problem Management System
Problem control activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail
Error control activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail
Proactive Problem Management | DS 8 | DS 8.5 | Trend Analysis and Reporting
Providing information to the support organisation | DS 8 | DS 8.5 | Trend Analysis and Reporting
Metrics | DS 10 | DS 10.0 | Manage Problems and Incidents
Roles within Problem Management | DS 10 | DS 10.0 | Manage Problems and Incidents

CONFIGURATION MANAGEMENT | DS 9 | DS 9.0 | Manage the Configuration
Goal of Configuration Management | DS 9 | DS 9.0 | Manage the Configuration
Scope of Configuration Management | DS 9 | DS 9.0 | Manage the Configuration
Basic concepts | DS 9 | DS 9.1 | Configuration Recording
Benefits and possible problems | DS 9 | DS 9.1 | Configuration Recording
Planning and implementation | DS 9 | DS 9.1 | Configuration Recording
Activities | DS 9 | DS 9.0 | Manage the Configuration
Process control | DS 9 | DS 9.0 | Manage the Configuration
Relations to other processes | n.a. | n.a. | n.a.
Tools specific to the Configuration Management process | n.a. | n.a. | n.a.
Impact of new technology | n.a. | n.a. | n.a.
Guidance on Configuration Management | n.a. | n.a. | n.a.

CHANGE MANAGEMENT | AI 6 | AI 6.0 | Manage Changes
Goal of Change Management | AI 6 | AI 6.0 | Manage Changes
Scope of Change Management | AI 6 | AI 6.0 | Manage Changes
Basic concepts | AI 6 | AI 6.1 | Change Request Initiation and Control
Benefits, costs and possible problems | AI 6 | AI 6.2 | Impact Assessment
Activities | AI 6 | AI 6.0 | Manage Changes
Planning and implementation | AI 6 | AI 6.0 | Manage Changes
Metrics and management reporting | AI 6 | AI 6.2 | Impact Assessment
Software tools | AI 6 | AI 6.3 | Control of Changes
Impact of new technology | n.a. | n.a. | n.a.

RELEASE MANAGEMENT | AI 6 | AI 6.0 | Manage Changes
Goal of Release Management | AI 6 | AI 6.7 | Software Release Policy
Scope of Release Management | AI 6 | AI 6.7 | Software Release Policy
Basic concepts | AI 6 | AI 6.7 | Software Release Policy
Benefits and possible problems | AI 6 | AI 6.7 | Software Release Policy
Planning and implementation | AI 6 | AI 6.7 | Software Release Policy
Process control | AI 6 | AI 6.7 | Software Release Policy
Relations to other processes | n.a. | n.a. | n.a.
Tools specific to the Release Management process | n.a. | n.a. | n.a.
Guidance for successful Release Management | AI 6 | AI 6.7 | Software Release Policy
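
Because mappings like the one above are just tabular data, you can treat them as such. Here is a minimal sketch using a handful of rows from the table (the function and dictionary names are mine):

```python
# A few rows from the ITIL v2 -> COBIT v4 mapping above.
ITIL_TO_COBIT = {
    "Incident Management":      [("DS 10.0", "Manage Problems and Incidents")],
    "Problem Management":       [("DS 10.0", "Manage Problems and Incidents"),
                                 ("DS 8.5",  "Trend Analysis and Reporting")],
    "Change Management":        [("AI 6.0",  "Manage Changes")],
    "Configuration Management": [("DS 9.0",  "Manage the Configuration")],
}

def cobit_coverage(itil_process: str) -> list:
    """Return the COBIT control objectives mapped to an ITIL process."""
    return ITIL_TO_COBIT.get(itil_process, [])

print(cobit_coverage("Problem Management"))
# -> [('DS 10.0', 'Manage Problems and Incidents'), ('DS 8.5', 'Trend Analysis and Reporting')]
```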

Monday 25 August 2008

SAS 70 Reports: A sheep in wolf's clothing.

SAS 70 reports (particularly "Type II") are seen as some kind of certification that a (service) company can be trusted and has good processes and controls in place. A company that gains a favorable SAS 70 report often claims "bragging" rights about "getting SAS70 certified" or "passing the audit".

OK, there are many things wrong with that first paragraph. There is a huge misconception about what SAS 70 is, and for me this is dangerous because a lot of folks overestimate the usefulness of SAS 70 reports; indeed, a report is generally seen as a means of assuring compliance with Sarbanes-Oxley section 404 - which it is not, but we'll get to that.

Let's start with the idea that companies comply with SAS 70.

POINT 1: YOU CANNOT COMPLY WITH SAS 70
The way a SAS 70 audit works is this:
  • the service provider states some objectives they hope to achieve using "controls";
  • the service provider states the controls that they use to meet their objectives;
  • an independent auditor looks at evidence that the stated controls are operating and effective at meeting the stated objectives.
So basically the audit verifies that a company does what it says it does, and that what it does meets its own objectives.

POINT 2: SAS 70 DOES NOT ASSURE QUALITY OR BEST PRACTICE
As noted in point 1, the audit reports a company's compliance with its own procedures/controls. Now those controls may be slow, cumbersome, expensive, outdated, etc. But if they meet the objectives stated by the company then there is nothing to report.

POINT 3: SAS 70 DOES NOT REQUIRE A MINIMUM SET OF CONTROLS
As stated on the SAS70.com website:

"Since service organizations are responsible for describing their controls and defining their control objectives, there is no published list of SAS 70 standards. Generally, the control objectives are specific to the service organization and their customers."
However, there are certain headline areas that do need to be addressed:
  1. Control Environment.
  2. Risk Assessment.
  3. Control Activities.
  4. Information and Communication.
  5. Monitoring.
So if a company defines 5 controls, that is what gets audited.
I think there is an implicit assumption that the "independent auditor" who performs the SAS 70 audit will use some best practice framework as a benchmark (such as COSO/COBIT), but this is not required.

POINT 4: THERE IS NO PASS OR FAIL...THERE IS NO CERTIFICATION
So as we can see from the preceding points, you cannot pass or fail because there are no independent criteria against which to score, and you cannot be certified for the same reason.
What can happen is that the report will describe some control that is operated incorrectly or ineffectively such that its defined objective is not met.


So, finally, what use is a SAS 70 report?
Well, it is promoted as a means of confirming Sarbanes-Oxley compliance.

The Wikipedia article even says:
"SOX heightened the focus placed on understanding the controls over financial reporting and identified a Type II SAS 70 report as the only acceptable method for a third party to assure a service organization's controls."
This is nonsense. A SAS 70 report is a very useful tool for communicating audit results. It is not a statement of compliance with SOX or any other regulation.

SAS 70 is useful and misleading at the same time - and which it is depends on how it is sold to you and how much you understand about the process.
A "white paper" highlight many of the failings of SAS 70 even exists; SAS 70: The Emperor Has No Clothes. This was written by an ISO17799 consultancy, so they have their own agenda to push, but I do not and have independently come to the same conclusions (as have other bloggers).

So what is the benefit of a "clean" SAS 70 Type II report? Here are a couple of thoughts:
  1. From sas70.com; "Many SAS 70 audit engagements result from user organizations making repeated requests of the service organization in order to gain an understanding of the internal control environment at the service organization." So it can save time when end user organisations want to audit a service provider. It is worth remembering that a SAS 70 report is meant to be an auditor-to-auditor communication tool.
  2. If a service company is actually making an effort to apply COBIT or some other defined best practice, the audit can serve to highlight areas for improvement, but only if they ensure that they provide the appropriate criteria to the auditor (see point 2 above).
I believe the one thing to really remember is that a "clean" SAS 70 Type II report guarantees nothing. The end user company needs to define what it wants in terms of quality and control from its suppliers, and then it can use the SAS70 report to confirm if their criteria are met by the supplier's controls.

Friday 15 August 2008

Regulatory: FDA Warning Letter Top Citations

Looking at the past 3 years of warning letters, we can count the number of letters that cite a specific regulation at least once. In this way we can see which parts are cited in the most letters by FDA, and therefore the areas in which companies most commonly fail.

As we can see, there are three areas that really stand out for continual citation:

  • 820.100 Corrective and preventive action...
  • 820.198 Complaint files....
  • 820.30 Design controls....

It is clear that the first two are co-dependent, and a failure in managing design controls will also feed into Complaints and CAPA.
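
For the curious, the tally behind rankings like these is simple to reproduce. Here is a minimal sketch, assuming each letter has been reduced to the set of sections it cites; the data layout is my own:

```python
from collections import Counter

# Each warning letter reduced to the set of 21 CFR sections it cites at least once.
letters = [
    {"820.100", "820.198"},
    {"820.30"},
    {"820.100", "820.30", "820.198"},
    # ... one set per letter in the database
]

counts = Counter()
for cited in letters:
    counts.update(cited)  # a section counts once per letter, however often it is cited

for section, n in counts.most_common(3):
    print(f"21 CFR {section}: cited in {n} letters")
```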

Drilling down into the detail, we can see in individual warning letters that time and again, companies do not manage complaints correctly. Key failures include:

  • Not assessing complaints at all,
  • Dismissing complaints as not critical or not investigating fully (no risk assessment or justification),
  • Not implementing corrective actions,
  • Not describing preventive actions or managing risk from failures,
  • Lacking procedures,
  • Having procedures but not following the processes.
So the key message is: treat complaints seriously, and if you do decide they do not require CAPA, make sure there is a documented rationale, including risk analysis, for that decision.

Wednesday 30 July 2008

Regulatory: FDA Exempts Phase 1 drugs from Part 211

A couple of weeks ago (15 July) the FDA issued Docket FDA-2005-N-0170-0005. This lays out their decision to exempt Phase 1 investigational drugs from the requirements of Part 211:

(c) An investigational drug for use in a phase 1 study, as described in § 312.21(a) of this chapter, is subject to the statutory requirements set forth in 21 U.S.C. 351(a)(2)(B). The production of such drug is exempt from compliance with the regulations in part 211 of this chapter. However, this exemption does not apply to an investigational drug for use in a phase 1 study once the investigational drug has been made available for use by or for the sponsor in a phase 2 or phase 3 study, as described in § 312.21(b) and (c) of this chapter, or the drug has been lawfully marketed. If the investigational drug has been made available in a phase 2 or phase 3 study or the drug has been lawfully marketed, the drug for use in the phase 1 study must comply with part 211.

Now here's what confuses me. In the preamble to this, the FDA state:

FDA believes this change...is appropriate because many of the issues presented by the production of investigational drugs intended for use in the relatively small phase 1 clinical trials are different from issues presented by the production of drug products for use in the larger phase 2 and phase 3 clinical trials or for commercial marketing.

OK, so far so good, that makes sense...

Additionally, many of the specific requirements in the regulations in part 211 do not apply to the conditions under which many drugs for use in phase 1 clinical trials are produced. For example, the concerns underlying the regulations' requirement for fully validated manufacturing processes, rotation of the stock for drug product containers, the repackaging and relabeling of drug products, and separate packaging and production areas are generally not concerns for these very limited production investigational drug products used in phase 1 clinical trials.


So this is a nice, clear rationale for exempting these types of drugs; risk-based and scientific.

BUT then they throw in a caveat...

However, once an investigational drug product has been manufactured by, or for, a sponsor and is available for use in a phase 2 or phase 3 study, thus demonstrating an intent to expose more subjects to the investigational drug and requiring that the regulations' CGMP requirements be met, the same investigational drug product used in any subsequent phase 1 study by the same sponsor must be manufactured in compliance with part 211.


So, you scale up production for Phase 2/3 and apply Part211 processes and controls, fine. But then if you decide to return to do a second Phase 1 trial, all those good reasons for not applying Part 211 cease to be valid!? How does that work? Surely, if I scrap a Phase 2 trial and want to repeat Phase 1 (for whatever reason) there exist the same, risk-based, scientific reasons for exempting the drug, i.e. small batches, stock rotation not feasible, repackaging and relabelling?
And then to top it all off, this only applies when the same Sponsor does it. Meaning Sponsor A does Phase 1 under an IND, then does a Phase 2 under Part 211, then goes back and does Phase 1 again, but this time still has to apply Part 211. Subsequently, along comes Sponsor B with the same drug, doing Phase 1 but only using an IND. OK, if the drug is commercially marketed and then goes through a Phase 1 trial for another indication, apply Part 211, since the drug samples for the trial will just be taken from the commercial stock - no problems there.

So much for clarity. I have read most of the preamble and cannot see either a comment pointing out this scenario or any explanation of why the FDA have made the rule this way. So I must be missing something very obvious...let me know if you can see why a Phase 1 drug should be treated any differently before or after it has passed through another Phase.

The European Compliance Academy also reports on this here.

Tuesday 29 July 2008

Solutions: Implementing ITIL methodology

ITIL is all fine and dandy in concept, but often implementing the processes can be cumbersome and expensive. Where do you start? All those processes that overlap and integrate...

Well, I found a really useful and easy to use solution: ServiceDesk Plus. Now, I do not work for or have any affiliations with this company and I don't get anything for plugging the software; I just think it is actually a good product. I'm no technical expert, but even I could download the free trial to a Linux desktop PC, install it, and have it up and running in less than 30 minutes (and Linux is not the easiest system to do that with).

So have a look and try it out.

Saturday 26 July 2008

Validation: What's most important?

What is the most important part of a validation program or strategy? This seems like a question similar to "what's the best film of all time?". It doesn't look like there can be a definitive answer; for some people it is "The Godfather", for others it would be the oft cited "Citizen Kane" and still others could not give a single answer* as there are too many to choose from. It's a very personal thing.

However when it comes to the components of the system life cycle there are a limited number of candidates to consider. Suggestions might include:
  • Validation Planning; obviously the first and most important part of the lifecycle?
  • Change Management Process; how can you maintain any control without this?
  • User/System Requirements; the bedrock of developing and testing a system is a solid set of requirements, surely?
Well these and others are strong candidates for being validation MVP (most valued part), but here's my choice and I'll explain why...

Traceability Management.

Saywhatnow? Let's discuss what we mean by this term.

Traceability can be imagined to be a web-like structure that connects the elements of a system or life cycle and thus enables an understanding of how items are related and dependent.

GAMP5 (Main Body section 4.2.5.4) states that traceability is a process for ensuring that:


  • Requirements are addressed and traceable to appropriate functional and design elements;
  • Requirements can be traced to the appropriate verification.
[Note this implies a relation between design elements and verification.]


But when you think about what traceability is really about and why it is important, you can see that this is way short of what we should consider.

Traceability underpins the system life cycle, both during development and (if you get it right) during operation. Without effective traceability management most other processes would not be practicable; indeed, the overall objectives of validation would be pretty hard to realise.

Here are some examples of how effective traceability management enables other areas to perform effectively:

Change Management: very important, but a change to a single configuration item will often have impacts and dependencies on others. So to ensure and maintain the integrity of the system architecture (both technical and documentation) we need to know what the relations are between configuration items. When we, say, change the code that generates a report, traceability can highlight that there are also technical specifications and user procedures that need to be updated.
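
As a simplified illustration of this, trace links can be held as a graph and walked to find everything affected by a change; the configuration item IDs below are hypothetical:

```python
# Trace links between configuration items: item -> items that depend on it.
TRACE = {
    "REQ-12":    ["DES-4"],                        # requirement -> design element
    "DES-4":     ["report.py", "TEST-9"],          # design -> code and test
    "report.py": ["SPEC-TS-02", "SOP-Reporting"],  # code -> spec and user procedure
}

def impact(item, seen=None):
    """Walk the trace links to collect every item affected by a change to `item`."""
    seen = set() if seen is None else seen
    for dependent in TRACE.get(item, []):
        if dependent not in seen:
            seen.add(dependent)
            impact(dependent, seen)
    return seen

print(sorted(impact("report.py")))
# ['SOP-Reporting', 'SPEC-TS-02'] - the spec and procedure that also need updating
```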

Design/Development: OK, so we have a set of requirements and these are worked over by the development team into a set of design specifications, describing how our requirements can or will be met by the system. How can we verify that all of our requirements are actually described by this design? We could just walk through the requirements one by one and see how the design fits. But in reality, designs are liable to change, and requirements may be expanded or reduced; there may not be a solution ready for some requirements until a later date. Maintaining traceability between requirements and design ensures that all requirements are addressed and none get forgotten. And if we get this right, this traceability will be invaluable during operational support, since it becomes a map of the system; the people doing the support will normally not be the same people who built the system, and they will need a map.

Test Planning and Management: Test Planning requires a known list of things to test. Effective traceability provides a list of items that may require testing, and the dependencies for such testing, such as security authorisations related to a functional area.
During and following Test Execution, traceability can be used to demonstrate the execution status of all tests so we can easily track where we are in the testing cycle. And perhaps most importantly, at the end of testing we can use traceability to show that everything we needed to test has actually been successfully tested.
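
In the same spirit, here is a minimal sketch of that end-of-testing coverage check; the requirement and test IDs are invented for the example:

```python
# Requirement -> the tests that verify it.
REQ_TO_TESTS = {
    "URS-01": ["TC-101", "TC-102"],
    "URS-02": ["TC-201"],
    "URS-03": [],  # no test linked yet: a gap to resolve before testing completes
}
TEST_RESULTS = {"TC-101": "pass", "TC-102": "pass", "TC-201": "fail"}

for req, tests in REQ_TO_TESTS.items():
    if not tests:
        print(f"{req}: NOT COVERED - no verification planned")
    elif all(TEST_RESULTS.get(t) == "pass" for t in tests):
        print(f"{req}: verified")
    else:
        print(f"{req}: open - linked tests not yet all passed")
```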

Incident and Problem Management/CAPA: As with Change Management, the analysis of incidents and problems is aided by understanding the relationships of the configuration items involved. For example, a software error presented from one area of system functionality may be the result of a bug in another area. Traceability serves to highlight possible areas of resolution.

Risk Management: Risk assessment and control should be performed at various levels throughout development, e.g. high-level project risk, detailed functional risk. Risk assessment may identify risk scenarios related to requirements/design, and risk controls may be implemented as elements of system design, which should then be verified as effective. Managing the traceability between system elements and their risks, between risks and their mitigations, and between mitigations and their verification is a necessary part of a good risk-based approach to validation.

So from the simplified GAMP5 model above we can develop a real-world model of relationships:


This illustrates the full range of relationships that could/should be defined for a large, complex system (e.g. a global ERP system). But remember that any validation process must be scaled appropriately. Scaling of traceability should be based primarily on system complexity and size; e.g. for a simple system with no configuration, used "out of the box", it may be acceptable to only relate requirements to testing. Another consideration is what tools are available to manage the traceability.

*By the way...my favourite film, if I had to pick just one...Casablanca.

Friday 25 July 2008

Regulatory: FDA Warning Letter CSV Analysis

Note: All data correct as of 23-Jul-2008.

When the FDA issues a warning letter to a company, it lists out the critical non-compliances and concerns the inspectors found. The warning letter also cites the specific parts of the regulation that there is a non-compliance with. Sometimes a computer system is involved in the non-compliance in one of two ways:
  • (a) the software is part of the product and not appropriately validated/controlled (such as software embedded in a medical device);
  • (b) the software is used to manufacture product or control data relevant to the product (such as document control or ERP systems)
Within my warning letter database, I flag each letter that contains a citation related to a computer system. Across all life sciences companies cited, the percentage of letters that have a CSV citation is around 10-12%. However, we can see that there has been a small but steady increase in citations related to computer systems over the past few years; this is inevitable as industry increases its dependence on computer systems in all areas of the enterprise, and as FDA inspectors become more aware of computer use and more knowledgeable about the risks systems can present to product safety and consumer health.

I also track whether a warning letter was issued as a result of foreign or US domestic inspection. This highlights an interesting bias:

The domestic warning letters show the same trend as the overall number, as expected since the domestic letters hugely outweigh foreign letters and thus skew the overall numbers.

However, when you extract the data for foreign warning letters only, the percentage that cite computer systems jumps to around 20-25%. This could be an artifact of the data due to the relatively small sample size, but when it happens in 3 out of 4 years it looks like a real phenomenon.
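
The underlying calculation is nothing exotic. Here is a sketch, assuming one record per letter with flags like those described above; the field names are mine:

```python
# One record per warning letter; field names invented for illustration.
letters = [
    {"year": 2005, "csv": True,  "foreign": True},
    {"year": 2005, "csv": False, "foreign": False},
    # ... roughly one record per letter in the database
]

def csv_rate(records, year, foreign=None):
    """Percentage of a year's letters with a CSV citation, optionally
    restricted to foreign (True) or domestic (False) inspections."""
    pool = [r for r in records
            if r["year"] == year and (foreign is None or r["foreign"] == foreign)]
    return 100.0 * sum(r["csv"] for r in pool) / len(pool) if pool else 0.0

print(csv_rate(letters, 2005))                # overall percentage
print(csv_rate(letters, 2005, foreign=True))  # foreign inspections only
```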

FDA does cite computer systems more during foreign inspections than domestic inspections. The key question is: Why is this? Here are some thoughts:

  • The level of compliance in foreign companies is actually lower than that found in US companies. This may be true since a number of inspections take place in emerging economies such as China or India. Local regulations in these regions generally have lower expectations than the Code of Federal Regulations, and are often less rigorously enforced.
  • Foreign companies do not prepare adequately for FDA inspections, leading to a poor presentation of their computer systems and supporting processes. This is probably true for companies operating in jurisdictions with mature regulatory governance such as the UK/EU and Japan; these regions have regulatory requirements for computer systems of a similar level to the FDA's, so it is unlikely that the actual level of compliance is that low.
  • US companies respond appropriately to the FDA Form 483 that lists inspection findings.
    After an inspection, a Form 483 is provided to the company, listing deficiencies. It is not mandatory that the company responds to this by detailing how it will address the FDA's findings. Often, a company does a poor job of this and the FDA follows up with a warning letter. Non-US companies have less experience with this than US companies and will be issued more warning letters as a result.
    Note that the FDA provided a presentation on “Writing An Effective 483 Response” at the 5th Annual FDA and the Changing Paradigm for HCT/P Regulation in January 2009 to address this topic.
  • Fewer US companies rely on computer systems and therefore these are not a factor during an inspection. Historically, US industry has not been an “early adopter” of new technologies and processes, and is slow to change. For example, it is only over the past few years that US industry and the FDA have really begun to acknowledge international standards such as ISO and ICH. So although US companies may be using current software systems, they may take a more “conservative” approach and still rely heavily on paper based records and data to perform regulated activities, rather than implementing a fully computerised system.
I think all of these points play a part in causing a higher CSV citation percentage for foreign inspections.

Of course, a more cynical view is that the FDA applies higher standards to foreign companies than it does to domestic companies.

Thursday 24 July 2008

Standards: ITIL v3...any good?

So ITIL v3 is in the wild. Has been for a while now. Is it any better than ITIL v2? Does it say anything new? Is it actually usable?

I think ITIL v3 is like Windows Vista...everyone had grown to love and understand its predecessor when a new shiny version comes along that really doesn't deliver anything spectacularly new or useful and has no compelling reason to be used. There's nothing wrong with the old version.

So why does it exist? Well, first let's look at the driving forces behind ITIL v3: consultancy companies. Accenture and others seem to be major players in this, having co-authored a lot of the content - just look at the first line of the first book of ITIL v3:

"How do you become not optional?", William D. Green, CEO, Accenture

Can anyone say "Shameless promotion"?

Who will benefit most from a new ITIL version? I might suggest that certain consultancy companies who provide ITIL training and advisory services have a lot to gain from a new ITIL, just when everyone was getting to grips with v2 (and hence not really needing those consultancy services anymore). A conflict of interest?

So, maybe a cynical answer to Mr. Green's question is:

"You reshape the established system to your own design so everyone has to come and pay you to explain how it works." Brilliant.

Here's an example of this:
In ITIL v2 we had Incident Management and we had Problem Management.
Now we have Incident Management, "Request Fulfilment" and Problem Management.
What is this new and strange process? Well, it is the management of "routine" incidents or Service Requests (examples given in ITIL are "...e.g. a request to change a password, a request to install an additional software application onto a particular workstation, a request to relocate some items of desktop equipment...").

By the way, it then goes on to say a few paragraphs later
"Note, however, that there is a significant difference here – an incident is usually an unplanned event whereas a Service Request is usually something that can and should be planned!"
So how exactly do you plan for someone forgetting their password and requesting it be changed?!

Well, I thought these events were covered just fine in v2 by the service management and incident management processes; you just use appropriate categorisation of an Incident as a Service Request. But here they have split a hair and come up with a whole new chapter of waffle.

OK, there is some extra stuff there that is useful, even if "borrowed" from existing standards - Access Management is one useful addition (ISO 17799, anyone?). But seriously, was there really a need for a whole number increment of ITIL? I don't think so. Adding to and popularising ITIL v2 would have been fine.

How many books are there for ITIL v2? Two, I hear? What, just Service Delivery and Service Support? What about the Application Management, ICT Infrastructure Management and Planning to Implement Service Management books? There are FIVE books that comprise ITIL v2 - who uses those last three?

So in summary, I think ITIL v3 exists because ITIL consultants wanted it rather than IT managers wanting it. Use ITIL v2 and don't worry about certification - you need ISO 20000 certification anyway, so use ITIL v2 to inform your choices on how to be ISO 20000 compliant.

Wednesday 23 July 2008

Regulatory: FDA Warning Letter General Trend

Over the past few years I have been reading every warning letter published by the FDA and adding them to a database. I now have a database of more than 3000 warning letters, with metadata such as company type, whether it is CSV relevant, which parts of 21CFR are cited, whether it is a foreign or domestic inspection, etc.

This data can be analysed to detect trends and correlations arising from FDA inspections.

I will be publishing my analysis of the FDA Warning Letters regularly.

Let's start with a simple analysis: How many warning letters are the FDA issuing every year?

This graph shows the total warning letters issued across all industries as of 23-July-2008 (so the 2008 figure is much lower since the year is just over half finished). As we can see, the number has been dropping continuously since 2004, and 2008 is on course to follow this trend.

Combine this with a recent press report stating that the FDA is looking to recruit something in the region of 2500 new staff, and I think we can see that the FDA is not able to perform as many inspections as they once could.

Tuesday 22 July 2008

Regulatory: FDA Proposed rule shot down

Towards the end of 2007, the FDA issued the proposed rule "Amendment to the Current Good Manufacturing Practice Regulations for Finished Pharmaceuticals".

You can see the proposed rule, comments and subsequent withdrawal notice here.

I commented (as did a number of other companies) in a response to the FDA as follows:


GENERAL COMMENT

The Agency's provision of clarification in this area is to be welcomed, but the proposed ruling has the potential to conflict with current industry practice and to curb the development of good practice, contrary to ASTM E2500.

Specifically, we question the approach expressed in the proposed change to 211.103 and 211.188.

These changes are intended to “clarify the agency's longstanding interpretation of, or increase latitude for manufacturers in complying with, preexisting CGMP requirements”. In our opinion they do not achieve this goal, but rather confuse the agency’s intent with respect to the requirements of 211.68.

The agency states in the preamble (Section II. D);

“we are amending Sec. 211.101(c) and (d), 211.103, 211.182, and 211.188(b)(11) to indicate that the use of automated equipment under Sec. 211.68 may eliminate the need for verification by a second individual”

However the proposed changes will still require verification by a second individual, with the first “individual” being an automated system.

Our understanding of this proposed change is that if a calculation of yield is performed by an automated (computer) system, then that calculation must also be verified manually (211.103). The person manually verifying the calculation must then be identified in the batch records for that operation (211.188).

Currently, under direction from predicate rules such as 211.68(b), if an automated (computer) system were employed to calculate yield, that function would be validated. The rationale for appropriately validating the function is that the function can be proven to be accurate and consistent, which negates the need for manual verification. In effect, the manual verification is appropriately performed during the validation exercise (using a range of test data and positive and negative test cases), thus ensuring future accurate operation in a controlled system.

Under current good practice this means that a manufacturer will spend time and resource in validating an automated function (such as yield calculation) knowing that during subsequent operation they can be confident of a consistently accurate output given accurate inputs, and therefore that output does not need to be re-checked manually. This has an operational time/cost benefit that is a major incentive for a manufacturer to invest in the initial validation effort.

We believe that although such calculations potentially impact product quality and patient safety, the use of appropriately validated computerized systems is in line with ASTM E2500 which places the emphasis on the appropriate verification of systems.

However, the proposed change subverts this paradigm and puts into question the value of validating such functions. In essence, pharmaceutical manufacturers may question the benefit of validating a function that must additionally be manually verified every time it operates.

Our concern is that pharmaceutical manufacturers are required to expend time and effort in the validation of the automated system but will no longer have the benefit of improved process efficiency and operational cost saving.

Additionally, given that “this proposed rule represents the first increment of modifications to parts 210 and 211”, we are concerned that similar changes might be considered for other sections where an automated system may be used to perform a function.

We believe that a pragmatic approach would encompass validation of the automated system (as required in 211.68(b)) and that other rules such as 211.103 would require verification of data entry and/or the resulting output; e.g. the second person should verify that the input data and validated result are included in, for example, the batch record, but would not be required to recalculate the result so long as the calculation has been performed by an appropriately validated system.

We believe this would reflect the current understanding and practices within the industry and the intent by the agency to “encourage innovation and the development of improved manufacturing technologies”.
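
Stepping outside the formal comment for a moment: to make the yield-calculation argument concrete, here is a minimal sketch of the kind of function at issue, together with the positive and negative test cases a one-off validation exercise might document (all names and values are illustrative):

```python
def percentage_yield(actual: float, theoretical: float) -> float:
    """Percentage of theoretical yield, of the kind recorded in batch records."""
    if theoretical <= 0:
        raise ValueError("theoretical yield must be positive")
    return 100.0 * actual / theoretical

# Validation test cases, executed once with documented evidence, so that routine
# manual recalculation of every batch adds nothing:
assert percentage_yield(95.0, 100.0) == 95.0          # nominal (positive) case
assert round(percentage_yield(1.0, 3.0), 2) == 33.33  # rounding behaviour
try:
    percentage_yield(95.0, 0.0)                       # negative case: bad input rejected
except ValueError:
    pass
```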


Most companies who responded presented similar arguments. However, it is worth noting that some companies actually welcomed the proposed ruling as a good thing(!) that clarified the situation. It is not clear whether they actually read the proposed rule, or just wanted to get their names on the FDA website as respondents.