Wednesday 30 July 2008

Regulatory: FDA Exempts Phase 1 drugs from Part 211

A couple of weeks ago (15 July) the FDA issued Docket FDA-2005-N-0170-0005. This lays out their decision to exempt Phase 1 investigational drugs from the requirements of Part 211:

(c) An investigational drug for use in a phase 1 study, as described in
§ 312.21(a) of this chapter, is subject to the statutory requirements set forth in
21 U.S.C. 351(a)(2)(B). The production of such drug is exempt from compliance
with the regulations in part 211 of this chapter. However, this exemption does
not apply to an investigational drug for use in a phase 1 study once the
investigational drug has been made available for use by or for the sponsor
in a phase 2 or phase 3 study, as described in § 312.21(b) and (c) of this
chapter, or the drug has been lawfully marketed. If the investigational drug has
been made available in a phase 2 or phase 3 study or the drug has been
lawfully marketed, the drug for use in the phase 1 study must comply with
part 211.

Now here's what confuses me. In the preamble to this, the FDA state:

FDA believes this change...is appropriate because many of the issues presented by the production of investigational drugs intended for use in the relatively small phase 1 clinical trials are different from issues presented by the production of drug products for use in the larger phase 2 and phase 3 clinical trials or for commercial marketing.

OK, so far so good, that makes sense...

Additionally, many of the specific requirements in the regulations in part 211 do not apply to the conditions under which many drugs for use in phase 1 clinical trials are produced. For example, the concerns underlying the regulations’ requirement for fully validated manufacturing processes, rotation of the stock for drug product containers, the repackaging and relabeling of drug products, and separate packaging and production areas are generally not concerns for these very limited production investigational drug products used in phase 1 clinical trials.


So this is a nice, clear rationale for exempting these types of drugs: risk-based and scientific.

BUT then they throw in a caveat...

However, once an investigational drug product has been manufactured by, or for, a sponsor and is available for use in a phase 2 or phase 3 study, thus demonstrating an intent to expose more subjects to the investigational drug and requiring that the regulations’ CGMP requirements be met, the same investigational drug product used in any subsequent phase 1 study by the same sponsor must be manufactured in compliance with part 211.


So, you scale up production for Phase 2/3 and apply Part 211 processes and controls; fine. But then, if you decide to go back and run a second Phase 1 trial, all those good reasons for not applying Part 211 cease to be valid!? How does that work? Surely, if I scrap a Phase 2 trial and want to repeat Phase 1 (for whatever reason), the same risk-based, scientific reasons for exempting the drug still exist, i.e. small batches, stock rotation not feasible, repackaging and relabelling?
And then, to top it all off, this only applies when the same sponsor does it. Meaning Sponsor A does Phase 1 under just an IND, then does a Phase 2 under Part 211, then goes and does Phase 1 again, but this time still has to apply Part 211. Meanwhile, along comes Sponsor B with the same drug, doing Phase 1 under just an IND and keeping the exemption. Granted, if the drug is commercially marketed and then goes through a Phase 1 trial for another indication, applying Part 211 makes sense, since the drug samples for the trial will just be taken from the commercial stock; no problem there.

So much for clarity. I have read most of the preamble and cannot see either a comment pointing out this scenario or any explanation of why the FDA have made the rule this way. So I must be missing something very obvious...let me know if you can see why a Phase 1 drug should be treated any differently before or after it has passed through another Phase.

The European Compliance Academy also reports on this here.

Tuesday 29 July 2008

Solutions: Implementing ITIL methodology

ITIL is all fine and dandy in concept, but often implementing the processes can be cumbersome and expensive. Where do you start? All those processes that overlap and integrate...

Well, I found a really useful and easy-to-use solution: ServiceDesk Plus. Now, I do not work for or have any affiliation with this company, and I don't get anything for plugging the software; I just think it is actually a good product. I'm no technical expert, but even I could download the free trial onto a Linux desktop PC, install it, and have it up and running in less than 30 minutes (and Linux is not the easiest system to do that with).

So have a look and try it out.

Saturday 26 July 2008

Validation: What's most important?

What is the most important part of a validation program or strategy? This seems like a question similar to "what's the best film of all time?". It doesn't look like there can be a definitive answer; for some people it is "The Godfather", for others it would be the oft cited "Citizen Kane" and still others could not give a single answer* as there are too many to choose from. It's a very personal thing.

However when it comes to the components of the system life cycle there are a limited number of candidates to consider. Suggestions might include:
  • Validation Planning; obviously the first and most important part of the lifecycle?
  • Change Management Process; how can you maintain any control without this?
  • User/System Requirements; the bedrock of developing and testing a system is a solid set of requirements, surely?
Well these and others are strong candidates for being validation MVP (most valued part), but here's my choice and I'll explain why...

Traceability Management.

Saywhatnow? Let's discuss what we mean by this term.

Traceability can be imagined as a web-like structure that connects the elements of a system or life cycle, enabling an understanding of how items are related and dependent on one another.

GAMP5 (Main Body section 4.2.5.4) states that traceability is a process for ensuring that:


  • Requirements are addressed and traceable to appropriate functional and design elements;
  • Requirements can be traced to the appropriate verification.
[Note this implies a relation between design elements and verification.]


But when you think about what traceability is really about and why it is important, you can see that this is way short of what we should consider.

Traceability underpins the system life cycle, both during development and (if you get it right) during operation. But without effective traceability management, most other processes would not be practicable; indeed, the overall objectives of validation would be pretty hard to realise.

Here are some examples of how effective traceability management enables other areas to perform effectively:

Change Management: very important, but a change to a single configuration item will often have impacts on, and dependencies with, others. So to ensure and maintain the integrity of the system architecture (both technical and documentation) we need to know what the relations between configuration items are. When we change, say, the code that generates a report, traceability can highlight that there are also technical specifications and user procedures that need to be updated.
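To make that concrete, here is a minimal sketch in Python of how traceability links can drive a change impact query. All the item names are invented for illustration; any real tool would, of course, be far richer.

from collections import defaultdict

trace = defaultdict(set)  # configuration item -> items that depend on it

def link(source, dependent):
    trace[source].add(dependent)

# Hypothetical configuration items around a reporting function.
link("REQ-042 monthly yield report", "DS-107 report module design")
link("DS-107 report module design", "CODE report_generator")
link("CODE report_generator", "SPEC-12 technical specification")
link("CODE report_generator", "SOP-31 user procedure: running reports")

def impacted_by(item, seen=None):
    # Walk the links to find everything a change to 'item' may affect.
    seen = set() if seen is None else seen
    for dependent in trace.get(item, ()):
        if dependent not in seen:
            seen.add(dependent)
            impacted_by(dependent, seen)
    return seen

# Changing the report code flags the spec and the user procedure for review.
print(impacted_by("CODE report_generator"))

Even a toy structure like this answers the operational question "what else do we need to update?" without trawling through a pile of documents.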

Design/Development: OK, so we have a set of requirements, and these are worked over by the development team into a set of design specifications describing how our requirements can or will be met by the system. How can we verify that all of our requirements are actually covered by this design? We could just walk through the requirements one by one and see how the design fits. But in reality, designs are liable to change, requirements may be expanded or reduced, and there may not be a solution ready for some requirements until a later date. Maintaining traceability between requirements and design ensures that all requirements are addressed and none get forgotten. And if we get this right, this traceability will be invaluable during operational support, since it becomes a map of the system; the people doing the support will normally not be the same people who built the system, and they will need a map.

Test Planning and Management: Test planning requires a known list of things to test. Effective traceability provides a list of the items that may require testing, and the dependencies of that testing, for example security authorisations related to a functional area.
During and following test execution, traceability can be used to demonstrate the execution status of all tests, so we can easily track where we are in the testing cycle. And perhaps most importantly, at the end of testing we can use traceability to show that everything we needed to test has actually been successfully tested.
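Again, a minimal sketch of the end-of-testing check this enables (the requirement and test names are invented for illustration):

requirement_tests = {
    "REQ-001 user login":        [("TC-01", "PASS"), ("TC-02", "PASS")],
    "REQ-002 audit trail":       [("TC-03", "FAIL")],
    "REQ-003 report generation": [],   # not yet traced to any test
}

for req, tests in requirement_tests.items():
    if not tests:
        status = "NO TEST COVERAGE"
    elif all(result == "PASS" for _, result in tests):
        status = "VERIFIED"
    else:
        status = "OPEN FAILURES"
    print(f"{req}: {status}")

The untested requirement and the open failure jump straight out, which is exactly what you want at the end of a test cycle.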

Incident and Problem Management/CAPA: As with Change Management, the analysis of incidents and problems is aided by understanding the relationships between the configuration items involved. For example, a software error presenting in one area of system functionality may be the result of a bug in another area. Traceability serves to highlight possible areas of resolution.

Risk Management: Risk assessment and control should be performed at various levels throughout development, e.g. high-level project risk, detailed functional risk. Risk assessment may identify risk scenarios related to requirements/design, and risk controls may be implemented as elements of system design, which should then be verified as effective. Managing the traceability between system elements and their risks, between risks and their mitigations, and between mitigations and their verification is a necessary part of a good risk-based approach to validation.
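A sketch of the kind of gap check this enables (all identifiers are invented for illustration):

risks = {
    "RISK-01 wrong dosage calculated": {
        "control": "DS-22 double-entry check on dosing screen",
        "verification": "TC-15 dosing screen negative tests",
    },
    "RISK-02 unauthorised record change": {
        "control": "DS-30 role-based access control",
        "verification": None,   # gap: control not yet verified
    },
}

for risk, links in risks.items():
    if not links["control"]:
        print(f"{risk}: no control defined")
    elif not links["verification"]:
        print(f"{risk}: control '{links['control']}' not yet verified")
    else:
        print(f"{risk}: traceable end to end")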

So from the simplified GAMP5 model above we can develop a real-world model of relationships:


This illustrates the full range of relationships that could/should be defined for a large, complex system (e.g. a global ERP system). But remember that any validation process must be scaled appropriately. Scaling of traceability should be based primarily on system complexity and size; e.g. for a simple system with no configuration, used "out of the box", it may be acceptable to relate requirements directly to testing and nothing more. Another consideration is what tools are available to manage the traceability.

*By the way...my favourite film, if I had to pick just one...Casablanca.

Friday 25 July 2008

Regulatory: FDA Warning Letter CSV Analysis

Note: All data correct as of 23-Jul-2008.

When the FDA issues a warning letter to a company, it lists out the critical non-compliances and concerns the inspectors found, and cites the specific parts of the regulation with which there is a non-compliance. Sometimes a computer system is involved in the non-compliance, in one of two ways:
  • (a) the software is part of the product and not appropriately validated/controlled (such as software embedded in a medical device);
  • (b) the software is used to manufacture product or control data relevant to the product (such as document control or ERP systems).
Within my warning letter database, I flag each letter that contains a citation related to a computer system. Across all life sciences companies cited, the percentage of letters that have a CSV citation is around 10-12%. However, we can see that there has been a small but steady increase in citations related to computer systems over the past few years; this is inevitable as industry increases its dependence on computer systems in all areas of the enterprise, and as FDA inspectors become more aware of computer use and more knowledgeable about the risks such systems can present to product safety and consumer health.
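For the curious, the figure comes from a query along these lines. The schema is purely illustrative (an SQLite table of my own devising, not a published format):

import sqlite3

conn = sqlite3.connect("warning_letters.db")  # hypothetical database file
row = conn.execute(
    """SELECT 100.0 * SUM(csv_citation) / COUNT(*)
       FROM letters
       WHERE industry = 'life sciences'"""
).fetchone()
rate = row[0] if row[0] is not None else 0.0  # empty table -> no rate
print(f"CSV citation rate: {rate:.1f}%")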

I also track whether a warning letter was issued as a result of foreign or US domestic inspection. This highlights an interesting bias:

The domestic warning letters show the same trend as the overall numbers; this is expected, since domestic letters hugely outweigh foreign letters and thus dominate the overall figures.

However, when you extract the data for foreign warning letters only, the percentage that cite computer systems jumps to around 20-25%. This could be an artifact of the data due to the relatively small sample size, but when it happens in 3 out of 4 years it looks like a real phenomenon.
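Using the same illustrative schema as above, the foreign/domestic split is just a grouped version of that query:

import sqlite3

conn = sqlite3.connect("warning_letters.db")  # hypothetical database file
query = """SELECT year, origin, 100.0 * SUM(csv_citation) / COUNT(*)
           FROM letters
           WHERE industry = 'life sciences'
           GROUP BY year, origin
           ORDER BY year, origin"""
for year, origin, rate in conn.execute(query):
    print(f"{year} {origin}: {rate:.1f}% of letters cite a computer system")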

So the FDA does cite computer systems more during foreign inspections than domestic ones. The key question is: why? Here are some thoughts:

  • The level of compliance in foreign companies is actually lower than that found in US companies. This may be true since a number of inspections take place in emerging economies such as China or India. Local regulations in these regions generally have lower expectations than the Code of Federal Regulations, and are often less rigorously enforced.
  • Foreign companies do not prepare adequately for FDA inspections, leading to a poor presentation of their computer systems and supporting processes. This is probably the explanation for companies operating in jurisdictions with mature regulatory governance, such as the UK/EU and Japan: these regions have regulatory requirements for computer systems of a similar level to the FDA's, so it is unlikely that the actual level of compliance is that low, and poor preparation is the more plausible cause.
  • US companies respond appropriately to the FDA Form 483 that lists inspection findings.
    After an inspection, a Form 483 is provided to the company, listing the deficiencies found. It is not mandatory that the company responds to this, detailing how it will address the FDA's findings, and often a company does a poor job of it; the FDA then follows up with a warning letter. Non-US companies have less experience with this than US companies and are issued more warning letters as a result.
    Note that the FDA provided a presentation on “Writing An Effective 483 Response” at the 5th Annual FDA and the Changing Paradigm for HCT/P Regulation in January 2009 to address this topic.
  • Fewer US companies rely on computer systems, and therefore these are not a factor during an inspection. Historically, US industry has not been an “early adopter” of new technologies and processes, and is slow to change. For example, it is only over the past few years that US industry and the FDA have really begun to acknowledge international standards such as ISO and ICH. So although US companies may be using current software systems, they may take a more “conservative” approach and still rely heavily on paper-based records and data to perform regulated activities, rather than implementing a fully computerised system.
I think all of these points play a part in causing the higher CSV citation percentage for foreign inspections.

Of course, a more cynical view is that the FDA applies higher standards to foreign companies than it does to domestic companies.

Thursday 24 July 2008

Standards: ITIL v3...any good?

So ITIL v3 is in the wild. Has been for a while now. Is it any better than ITIL v2? Does it say anything new? Is it actually usable?

I think ITIL v3 is like Windows Vista...everyone had grown to love and understand its predecessor, and then a shiny new version comes along that really doesn't deliver anything spectacularly new or useful and gives no compelling reason to switch. There's nothing wrong with the old version.

So why does it exist? Well, first let's look at the driving forces behind ITIL v3: consultancy companies. Accenture and others seem to be major players in this, having co-authored a lot of the content - just look at the first line of the first book of ITIL v3:

"How do you become not optional?", William D. Green, CEO, Accenture

Can anyone say "Shameless promotion"?

Who will benefit most from a new ITIL version? I might suggest that certain consultancy companies who provide ITIL training and advisory services have a lot to gain from a new ITIL, just when everyone was getting to grips with v2 (and hence not really needing those consultancy services any more). Conflict of interests?

So, maybe a cynical answer to Mr. Green's question is:

"You reshape the established system to your own design so everyone has to come and pay you to explain how it works." Brilliant.

Here's an example of this:
In ITIL v2 we had Incident Management and we had Problem Management.
Now we have Incident Management, "Request Fulfilment" and Problem Management.
What is this new and strange process? Well, it is the management of "routine" incidents, or Service Requests (examples given in ITIL are "...e.g. a request to change a password, a request to install an additional software application onto a particular workstation, a request to relocate some items of desktop equipment...").

By the way, it then goes on to say, a few paragraphs later:
"Note, however, that there is a significant difference here – an incident is usually an unplanned event whereas a Service Request is usually something that can and should be planned!"
So how exactly do you plan for someone forgetting their password and requesting it be changed?!

Well, I thought these events were covered just fine in v2 by the service management and incident management processes; you just use appropriate categorisation to flag an Incident as a Service Request. But here they have split hairs and come up with a whole new chapter of waffle.
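To illustrate the point, a v2-style sketch in Python (the category names and routing are my own, not from ITIL):

from enum import Enum

class IncidentCategory(Enum):
    FAULT = "fault"                      # something is broken
    SERVICE_REQUEST = "service request"  # routine, pre-approved work

def route(incident):
    # One process, two routes, driven purely by categorisation.
    if incident["category"] is IncidentCategory.SERVICE_REQUEST:
        return "standard fulfilment procedure"
    return "diagnosis and resolution"

print(route({"category": IncidentCategory.SERVICE_REQUEST,
             "summary": "password reset for user jbloggs"}))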

OK, there is some extra stuff in there that is useful, even if "borrowed" from existing standards - Access Management is one useful addition (ISO 17799, anyone?). But seriously, was there really a need for a whole-number increment of ITIL? I don't think so. Adding to and popularising ITIL v2 would have been fine.

How many books are there for ITIL v2? Two, I hear? What, just Service Delivery and Service Support? What about the Application Management, ICT Infrastructure Management and Planning to Implement Service Management books? There are FIVE books that comprise ITIL v2 - who uses those last three?

So in summary, I think ITIL v3 exists because ITIL consultants wanted it, not because IT managers wanted it. Use ITIL v2 and don't worry about certification - you need ISO 20000 certification anyway, so use ITIL v2 to inform your choices on how to be ISO 20000 compliant.

Wednesday 23 July 2008

Regulatory: FDA Warning Letter General Trend

Over the past few years I have been reading every warning letter published by the FDA and adding them to a database. I now have a database of more than 3000 warning letters, with metadata such as company type, whether it is CSV relevant, which parts of 21CFR are cited, whether it is a foreign or domestic inspection, etc.
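For the curious, nothing exotic is needed for this. Here is a minimal sketch of the sort of structure involved (the table and column names are my own invention for illustration, not a published schema):

import sqlite3

conn = sqlite3.connect("warning_letters.db")  # hypothetical database file
conn.execute("""
    CREATE TABLE IF NOT EXISTS letters (
        letter_id    TEXT PRIMARY KEY,
        issue_date   TEXT,     -- date the letter was issued
        year         INTEGER,  -- denormalised for easy trending
        industry     TEXT,     -- e.g. 'life sciences'
        company_type TEXT,     -- e.g. drug, device, biologic
        origin       TEXT,     -- 'domestic' or 'foreign' inspection
        csv_citation INTEGER,  -- 1 if a computer system is cited
        cfr_parts    TEXT      -- 21 CFR parts cited, comma-separated
    )""")
conn.commit()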

This data can be analysed to detect trends and correlations arising from FDA inspections.

I will be publishing my analysis of the FDA Warning Letters regularly.

Let's start with a simple analysis: How many warning letters are the FDA issuing every year?

This graph shows the total warning letters issued across all industries as of 23-July-2008 (so the 2008 figure is much lower, since the year is just over half finished). As we can see, the number has been dropping continuously since 2004, and 2008 is on course to follow this trend.

Combine this with a recent press report stating that the FDA is looking to recruit something in the region of 2500 new staff, and I think we can see that the FDA is not able to perform as many inspections as they once could.

Tuesday 22 July 2008

Regulatory: FDA Proposed rule shot down

Towards the end of 2007, the FDA issued the proposed rule "Amendment to the Current Good Manufacturing Practice Regulations for Finished Pharmaceuticals".

You can see the proposed rule, comments and subsequent withdrawal notice here.

I commented (as did a number of other companies) in a response to the FDA as follows:


GENERAL COMMENT

The Agency's provision of clarification in this area is to be welcomed, but the proposed ruling has the potential to conflict with current industry practice and to curb the development of good practice, contrary to ASTM E2500.

Specifically, we question the approach expressed in the proposed changes to 211.103 and 211.188.

These changes are intended to “clarify the agency's longstanding interpretation of, or increase latitude for manufacturers in complying with, preexisting CGMP requirements”. In our opinion they do not achieve this goal, but rather confuse the agency’s intent with respect to the requirements of 211.68.

The agency states in the preamble (Section II.D):

“we are amending Sec. 211.101(c) and (d), 211.103, 211.182, and 211.188(b)(11) to indicate that the use of automated equipment under Sec. 211.68 may eliminate the need for verification by a second individual”

However the proposed changes will still require verification by a second individual, with the first “individual” being an automated system.

Our understanding of this proposed change is that if a calculation of yield is performed by an automated (computer) system, then that calculation must also be verified manually (211.103). The person manually verifying the calculation must then be identified in the batch records for that operation (211.188).

Currently, under direction from predicate rules such as 211.68(b), if an automated (computer) system were employed to calculate yield, that function would be validated. The rationale for appropriately validating the function is that the function can be proven to be accurate and consistent, thereby negating the need for manual verification. In effect, the manual verification is performed once, during the validation exercise (using a range of test data and positive and negative test cases), thus ensuring accurate future operation within a controlled system.

Under current good practice this means that a manufacturer will spend time and resources validating an automated function (such as a yield calculation), knowing that during subsequent operation they can be confident of a consistently accurate output given accurate inputs, and that the output therefore does not need to be re-checked manually. This has an operational time/cost benefit that is a major incentive for a manufacturer to invest in the initial validation effort.

We believe that although such calculations potentially impact product quality and patient safety, the use of appropriately validated computerized systems is in line with ASTM E2500, which places the emphasis on the appropriate verification of systems.

However, the proposed change subverts this paradigm and calls into question the value of validating such functions. In essence, pharmaceutical manufacturers may question the benefit of validating a function that must additionally be manually verified every time it operates.

Our concern is that pharmaceutical manufacturers are required to expend time and effort in the validation of the automated system but will no longer have the benefit of improved process efficiency and operational cost saving.

Additionally, given that “this proposed rule represents the first increment of modifications to parts 210 and 211”, we are concerned that similar changes might be considered for other sections where an automated system may be used to perform a function.

We believe that a pragmatic approach would encompass validation of the automated system (as required by 211.68(b)), with other rules such as 211.103 requiring a verification of data entry and/or the resulting output; e.g. the second person should verify that the input data and the validated result are included in, for example, the batch record, but would not be required to recalculate the result, so long as the calculation has been performed by an appropriately validated system.

We believe this would reflect the current understanding and practices within the industry and the intent by the agency to “encourage innovation and the development of improved manufacturing technologies”.
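(Stepping outside the formal comment for a moment: below is a rough sketch of what "verifying the calculation during validation" means in practice. The function and the test cases are illustrative only; they are not taken from the rule or from our submission.)

def percent_of_theoretical_yield(actual, theoretical):
    # Calculation to be validated once, rather than re-checked manually
    # on every batch.
    if theoretical <= 0:
        raise ValueError("theoretical yield must be positive")
    return 100.0 * actual / theoretical

# Positive cases: known inputs must give known outputs.
assert percent_of_theoretical_yield(95.0, 100.0) == 95.0
assert percent_of_theoretical_yield(150.0, 200.0) == 75.0

# Negative case: invalid input must be rejected, not miscalculated.
try:
    percent_of_theoretical_yield(95.0, 0.0)
except ValueError:
    pass
else:
    raise AssertionError("zero theoretical yield was not rejected")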


Most companies who responded presented similar arguments. However, it is worth noting that some companies actually welcomed the proposed ruling as a good thing(!) that clarified the situation. It is not clear whether they actually read the proposed rule or just wanted to get their names on the FDA website as respondents.