Thursday, 3 March 2011
European Union GMP Annex 11
The European Compliance Academy provided an analysis of the new text (and the related Chapter 4 update). And while it is generally useful in discussing the changes and the alignment / differences with Part 11, be sure to read the actual text of Annex 11. The ECA analysis does have flaws that might lead to an incorrect interpretation.
For example, in their detailed analysis, the reviewer states....
"Furthermore there is now the need for requirements traceability throughout the life cycle, for the first time in a regulation a traceability matrix is required."
This is completely missing the point and mis-stating the regulation. What the regulation states is...
"4.4... User requirements should be traceable throughout the life-cycle."
There is no mention of a traceability matrix! A traceability matrix is just one way to document traceability. Another way is to embed traceability via a naming convention.
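To make the distinction concrete, here is a minimal sketch of what "traceable" means in practice. All requirement and test IDs below are hypothetical, invented purely for illustration:

```python
# A traceability matrix is simply a mapping from user requirements to the
# items (here, test cases) that cover them; the IDs are hypothetical.

requirements = {"URS-001", "URS-002", "URS-003"}

# Each test case declares which requirements it covers.
test_coverage = {
    "OQ-TC-01": {"URS-001"},
    "OQ-TC-02": {"URS-002"},
}

covered = set().union(*test_coverage.values())
uncovered = requirements - covered  # {"URS-003"}: a traceability gap

# The alternative mentioned above: embed traceability in a naming
# convention, e.g. a test named "TC-URS-003" is traceable to URS-003
# by its name alone, with no separate matrix document.
print(sorted(uncovered))
```

Either approach satisfies 4.4; what matters is that every user requirement can be traced, not which artefact documents the trace.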
Monday, 19 July 2010
FDA To Conduct Inspections Focusing on 21 CFR 11
Wednesday, 25 February 2009
Standards: Incidents and Deviations
A colleague mentioned to me that he saw no real difference between Incidents and Deviations, because they are just different terms for unplanned events: one is an ITIL-derived term, the other is used more in the Life Sciences industry.
Without thinking too much I agreed, but then I started to ponder if this was really the case; it seemed to me there was a difference.
A deviation is an event that is contrary to an expected result, or an incorrect operation; normally the event is a deviation from something stated in a plan.
An incident is just an event; it does not have to be a deviation. An incident is usually defined relative to something in a procedure.
An example of an incident that is not a deviation could be:
You may have a performance management procedure that states the system is configured to send an alert email when disk space drops below 10% free. When this happens, the email is sent and this is an Incident, but it is not a deviation, since sending the alert is an expected operation of the system.
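The scenario above can be sketched in a few lines. The function name, threshold default, and byte values are all hypothetical; the point is only that raising the alert is the expected behaviour:

```python
# Minimal sketch of the disk-space alert scenario; names and values are
# illustrative. Returning an alert here is the *expected* operation of
# the system, so the resulting record is an Incident, not a Deviation.

def classify_free_space(total_bytes, free_bytes, threshold=0.10):
    """Return "incident" when free space drops below the threshold, else None."""
    if free_bytes / total_bytes < threshold:
        return "incident"  # a real system would send the alert email here
    return None

# In practice the readings might come from shutil.disk_usage("/").
print(classify_free_space(total_bytes=1000, free_bytes=50))   # incident
print(classify_free_space(total_bytes=1000, free_bytes=500))  # None
```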
However, in ITIL version 3, we have the "Request Fulfillment" process, which is designed to describe these incidents-that-are-not-deviations (as I have discussed previously), so maybe there really is no difference...although I still feel there is.
Tuesday, 28 October 2008
Validation: What is validation?
But then what about related terms such as Qualification, Verification and Testing? Why use a term such as validation when you can just say Testing?
Within the regulated Life Sciences sector, validation is more than just software testing. And we can show how Validation defines an overall framework incorporating Qualification, Verification and Testing.
Define Validation
First, let's define Validation...or rather let the FDA define it for us:
Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.
There are some key terms here that we should take note of:
documented evidence: We need to write stuff down that shows evidence of validation; it is not enough to go test something and then say it's validated.
specific process: This is what makes validation different from verification or testing. Validation is about the process, not just the tools (software) used to execute the process.
predetermined specifications and quality attributes: We need to define what the process should do, and how, before we execute the process.
Qualification, Verification and Testing
So Validation addresses the entire system, including software, underlying hardware (infrastructure), the business process, operating procedures, training, and the life cycle through to retirement.
The validation of a system will encompass one or more Qualification phases. These may focus on a life cycle phase (e.g. design phase, installation) or may focus on a specific component (e.g. infrastructure, support services).
A Qualification phase will comprise activities such as planning, specification, design, and analysis. One of these activities will be a Verification activity. There are different types of verification activity, depending on the scope of the qualification phase. For example, a design qualification phase will generally have code review and design review as verification activities.
Obviously, one form of verification is Testing. For example, an Operational Qualification phase would include System Testing, Integration Testing, User Acceptance Testing, etc., as verification activities.
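The nesting described above (Validation encompassing Qualification phases, each containing Verification activities, of which Testing is one kind) can be sketched as a simple data structure. This is illustrative only; the phase and activity names are taken from the post, and any real validation plan will differ:

```python
# Illustrative only: a validation effort encompasses qualification phases,
# each of which contains verification activities.
validation_framework = {
    "Design Qualification": ["design review", "code review"],
    "Operational Qualification": [
        "system testing",
        "integration testing",
        "user acceptance testing",
    ],
}

# Testing is just one form of verification activity among others.
testing_activities = [
    activity
    for activities in validation_framework.values()
    for activity in activities
    if "testing" in activity
]
print(testing_activities)
```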
Here is a diagram that should put everything in context:
Friday, 19 September 2008
Standards: Incidents, Problems and CAPA
An analysis of FDA Warning Letters issued over the past few years shows some recurring themes (see previous posts). One that consistently stands out is the regulation cited more than any other: 21 CFR 820.100 Corrective and Preventive Action, closely followed by 21 CFR 820.198 Complaint Files.
In this blog post I want to highlight what CAPA is, and also place it in context alongside Incident Management, Problem Management and Complaints.
Incident Management and Problem Management
ITIL has a clear explanation of Incident and Problem Management and the difference between them. It goes like this; consider the following analogy:
Every city has a stretch of road where accidents seem to occur on a regular basis; so-called “accident black-spots”. When an accident happens, the police are usually the first on the scene, quickly followed by other emergency vehicles as required: ambulances, fire, tow truck, etc. The first order of business is to attend to the injured. Next is to get the traffic moving again.
This is the essence of Incident Management; it is reactive and looks for an immediate, short-term solution.
Somewhere, people are gathering information and analysing that accident, what may have caused it and how it may relate to other accidents which occurred along that same stretch of road. They analyse, among other things, traffic patterns, the time of day, weather conditions at the time, road signage. From this analysis, they seek to determine the ROOT CAUSE of the accidents and thus find a means of preventing accidents.
This is the essence of Problem Management; it is proactive and looks for a permanent solution to prevent further incidents.
Corrective and Preventive Action
FDA Guidance says the following;
Corrective action is a reactive tool for system improvement to ensure that significant problems do not recur.
and...
Being proactive is an essential tool in quality systems management. Succession planning, training, capturing institutional knowledge, and planning for personnel, policy, and process changes are preventive actions that will help ensure that potential problems and root causes are identified, possible consequences assessed, and appropriate actions considered.
So the focus in these activities is to find root causes and ways to stop problems happening in the future, rather than righting what has happened in the past.
It is important to understand that CAPA is not an Incident Management process - CAPA is all about Problem Management; it is the same as ITIL Problem Management, with both reactive (triggered by incidents/failures) and proactive (triggered by other sources) activities.
The Incident Management process is essentially addressed by the "Complaint Files" regulation for medical devices (and others focussing on manufacturing incidents and adverse events). Perhaps this is a source for the large volume of citations for violation of these regulations; companies lack an understanding of the interface between Complaints (incidents) and CAPA (problems), which is much easier to understand when viewed from the ITIL framework.
GAMP Honorable Mention
As a footnote to this, it is worth mentioning that GAMP4 did not really address either incident or problem management. This has been corrected in GAMP5 with the addition of the Operational Appendices O4 Incident Management and O5 Corrective and Preventive Action. Note how GAMP employs language recognisable to IT stakeholders (incidents) and regulatory stakeholders (CAPA), bridging the gap of understanding that may have existed before.
Sunday, 7 September 2008
Standards: What is "best practice"??
There are a lot of things out there that claim to represent "best practice". Here is a brief list of the management systems or approaches that are commonly mentioned:
ISO 12207: aims to be 'the' standard that defines all the tasks required for developing and maintaining software.
ISO 20000: describes the best practices for service management
ISO 27001: specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented ISMS
ISO 9001: set of requirements for a quality management system.
ISO 13485: requirements for a comprehensive management system for the design and manufacture of medical devices.
ISO 15504: Software Process Improvement and Capability dEtermination is a "framework for the assessment of processes".
COBIT: a set of best practices (framework) for information technology (IT) management
ITIL: a set of concepts and techniques for managing information technology (IT) infrastructure, development, and operations.
GAMP: a series of Good Practice Guides on several topics involved in drug manufacturing
CMMI: a process improvement approach that provides organizations with the essential elements of effective processes.
COSO: a common definition of internal controls, standards, and criteria against which companies and organizations can assess their control systems
So what are the differences? Well, as you may deduce, there are not that many real differences underlying these publications. Some focus on service management, others on software development, and others on system or process controls, but in their efforts to broaden their appeal, they actually overlap to such an extent that they can often be mapped process for process - the only real difference is the language used to describe the tasks.
Here is a typical example of a mapping between ITIL v2 and COBIT v4. And remembering that ISO 20000 is derived from ITIL, there is therefore also a map from COBIT to ISO 20000.
So whichever system you choose to adopt, you can make a fairly safe bet that you will also be covering many of the requirements from the other models.
ITIL Process | COBIT Process | Control Objective | Control Objective Title |
SERVICE LEVEL MANAGEMENT | DS 1 | DS 1.0 | Define and Manage Service Levels |
The SLM Process | DS 1 | DS 1.1 | Service Level Agreement Framework |
Planning the Process | DS 1 | DS 1.2 | Aspects of Service Level Agreements |
Implementing the Process | DS 1 | DS 1.2 | Aspects of Service Level Agreements |
The On-going Process | DS 1 | DS 1.5 | Review of Service Level Agreements and Contracts |
SLA contents and key targets | DS 1 | DS 1.2 | Aspects of Service Level Agreements |
Key Performance Indicators and metrics for SLM efficiency and effectiveness | DS 1 | DS 1.4 | Monitoring and Reporting |
FINANCIAL MANAGEMENT FOR IT SERVICES | PO 5 | PO 5.0 | Manage the IT Investment |
Budgeting | PO 5 | PO 5.1 | Annual IT Operating Budget |
Developing the IT Accounting system | PO 5 | PO 5.1 | Annual IT Operating Budget |
Developing the Charging System | DS 6 | DS 6.2 | Costing Procedures |
Planning for IT Accounting and Charging | DS 6 | DS 6.1 | Chargeable Items |
Implementation | DS 6 | DS 6.0 | Identify and Allocate Costs |
Ongoing management and operation | DS 6 | DS 6.3 | User Billing and Chargeback Procedures |
CAPACITY MANAGEMENT | DS 2 | DS 2.0 | Manage Third-Party Services |
The Capacity Management process | DS 3 | DS 3.0 | Manage Performance and Capacity |
Activities in Capacity Management | DS 3 | DS 3.7 | Capacity Management of Resources |
Costs, benefits and possible problems | DS 3 | DS 3.7 | Capacity Management of Resources |
Planning and implementation | DS 3 | DS 3.0 | Manage Performance and Capacity |
Review of the Capacity Management process | DS 3 | DS 3.3 | Monitoring and Reporting |
Interfaces with other SM processes | n.a. | n.a. | n.a. |
IT Service Continuity Management | DS 4 | DS 4.0 | Ensure Continuous Service |
Scope of ITSCM | DS 4 | DS 4.1 | IT Continuity Framework |
The Business Continuity Lifecycle | DS 4 | DS 4.1 | IT Continuity Framework |
Management Structure | DS 4 | DS 4.1 | IT Continuity Framework |
Generating awareness | DS 4 | DS 4.1 | IT Continuity Framework |
Interfaces with other SM processes | n.a. | n.a. | n.a. |
AVAILABILITY MANAGEMENT | DS 4 | DS 4.0 | Ensure Continuous Service |
Basic concepts | DS 4 | DS 4.2 | IT Continuity Plan Strategy and Philosophy |
The Availability Management Process | DS 4 | DS 4.0 | Ensure Continuous Service |
The Cost of (Un)Availability | PO 9 | PO 9.4 | Assess Risks |
Availability Planning | DS 3 | DS 3.2 | Availability Plan |
Availability improvement | DS 4 | DS 4.4 | Minimising IT Continuity Requirements |
Availability measurement and reporting | DS 3 | DS 3.3 | Monitoring and Reporting |
Availability Management tools | DS 3 | DS 3.4 | Modeling Tools |
Availability Management methods and techniques | DS 3 | DS 3.0 | Manage Performance and Capacity |
THE SERVICE DESK | DS 8 | DS 8.0 | Assist and Advise Customers |
Overview | DS 8 | DS 8.1 | Help Desk |
Implementing a Service Desk infrastructure | DS 8 | DS 8.1 | Help Desk |
Service Desk technologies | n.a. | n.a. | n.a. |
Service Desk responsibilities, functions, staffing levels etc | PO 4 | PO 4.4 | Roles and Responsibilities |
Service Desk staffing skill set | PO 7 | PO 7.4 | Personnel Training |
Setting up a Service Desk environment | PO 8 | PO 8.1 | External Requirements Review |
Service Desk education and training | PO 7 | PO 7.4 | Personnel Training |
Service Desk processes and procedures | DS 8 | DS 8.0 | Assist and Advise Customers |
Incident reporting and review | DS 5 | DS 5.10 | Violation and Security Activity Reports |
INCIDENT MANAGEMENT | DS 10 | DS 10.0 | Manage Problems and Incidents |
Goal of Incident Management | DS 10 | DS 10.0 | Manage Problems and Incidents |
Scope of Incident Management | DS 10 | DS 10.1 | Problem Management System |
Basic concepts | DS 10 | DS 10.1 | Problem Management System |
Benefits of Incident Management | DS 10 | DS 10.1 | Problem Management System |
Planning and implementation | DS 10 | DS 10.1 | Problem Management System |
Incident Management activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail |
Handling of major Incidents | DS 10 | DS 10.2 | Problem Escalation |
Roles of the Incident Management process | DS 10 | DS 10.0 | Manage Problems and Incidents |
Key Performance Indicators | DS 10 | DS 10.3 | Problem Tracking and Audit Trail |
Tools | DS 10 | DS 10.1 | Problem Management System |
PROBLEM MANAGEMENT | DS 10 | DS 10.0 | Manage Problems and Incidents |
Goal of Problem Management | DS 10 | DS 10.0 | Manage Problems and Incidents |
Scope of Problem Management | DS 10 | DS 10.1 | Problem Management System |
Basic concepts | DS 10 | DS 10.1 | Problem Management System |
Benefits of Problem Management | DS 10 | DS 10.1 | Problem Management System |
Planning and implementation | DS 10 | DS 10.1 | Problem Management System |
Problem control activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail |
Error control activities | DS 10 | DS 10.3 | Problem Tracking and Audit Trail |
Proactive Problem Management | DS 8 | DS 8.5 | Trend Analysis and Reporting |
Providing information to the support organisation | DS 8 | DS 8.5 | Trend Analysis and Reporting |
Metrics | DS 10 | DS 10.0 | Manage Problems and Incidents |
Roles within Problem Management | DS 10 | DS 10.0 | Manage Problems and Incidents |
CONFIGURATION MANAGEMENT | DS 9 | DS 9.0 | Manage the Configuration |
Goal of Configuration Management | DS 9 | DS 9.0 | Manage the Configuration |
Scope of Configuration Management | DS 9 | DS 9.0 | Manage the Configuration |
Basic concepts | DS 9 | DS 9.1 | Configuration Recording |
Benefits and possible problems | DS 9 | DS 9.1 | Configuration Recording |
Planning and implementation | DS 9 | DS 9.1 | Configuration Recording |
Activities | DS 9 | DS 9.0 | Manage the Configuration |
Process control | DS 9 | DS 9.0 | Manage the Configuration |
Relations to other processes | n.a. | n.a. | n.a. |
Tools specific to the Configuration Management process | n.a. | n.a. | n.a. |
Impact of new technology | n.a. | n.a. | n.a. |
Guidance on Configuration Management | n.a. | n.a. | n.a. |
CHANGE MANAGEMENT | AI 6 | AI 6.0 | Manage Changes |
Goal of Change Management | AI 6 | AI 6.0 | Manage Changes |
Scope of Change Management | AI 6 | AI 6.0 | Manage Changes |
Basic concepts | AI 6 | AI 6.1 | Change Request Initiation and Control |
Benefits, costs and possible problems | AI 6 | AI 6.2 | Impact Assessment |
Activities | AI 6 | AI 6.0 | Manage Changes |
Planning and implementation | AI 6 | AI 6.0 | Manage Changes |
Metrics and management reporting | AI 6 | AI 6.2 | Impact Assessment |
Software tools | AI 6 | AI 6.3 | Control of Changes |
Impact of new technology | n.a. | n.a. | n.a. |
RELEASE MANAGEMENT | AI 6 | AI 6.0 | Manage Changes |
Goal of Release Management | AI 6 | AI 6.7 | Software Release Policy |
Scope of Release Management | AI 6 | AI 6.7 | Software Release Policy |
Basic concepts | AI 6 | AI 6.7 | Software Release Policy |
Benefits and possible problems | AI 6 | AI 6.7 | Software Release Policy |
Planning and implementation | AI 6 | AI 6.7 | Software Release Policy |
Process control | AI 6 | AI 6.7 | Software Release Policy |
Relations to other processes | n.a. | n.a. | n.a. |
Tools specific to the Release Management process | n.a. | n.a. | n.a. |
Guidance for successful Release Management | AI 6 | AI 6.7 | Software Release Policy |
Monday, 25 August 2008
SAS 70 Reports: A sheep in wolf's clothing.
OK, there are many things wrong with that first paragraph. There is a huge misconception about SAS 70 and what it is, and for me this is dangerous: a lot of folks overestimate the usefulness of SAS 70 reports, and it is generally seen as a means of assuring compliance with Sarbanes-Oxley section 404 - which it does not, but we'll get to that.
Let's start with the idea that companies comply with SAS 70.
POINT 1: YOU CANNOT COMPLY WITH SAS 70
The way a SAS 70 audit works is this:
- the service provider states some objectives they hope to achieve using "controls";
- the service provider states the controls that they use to meet their objectives;
- an independent auditor looks at evidence that the stated controls are operating and effective at meeting the stated objectives.
POINT 2: SAS 70 DOES NOT ASSURE QUALITY OR BEST PRACTICE
As noted in point 1, the audit reports a company's compliance with its own procedures/controls. Now, those controls may be slow, cumbersome, expensive, outdated, etc. But if they meet the objectives stated by the company, then there is nothing to report.
POINT 3: SAS 70 DOES NOT REQUIRE A MINIMUM SET OF CONTROLS
As stated on the SAS70.com website:
"Since service organizations are responsible for describing their controls and defining their control objectives, there is no published list of SAS 70 standards. Generally, the control objectives are specific to the service organization and their customers."
However, there are certain headline areas that do need to be addressed:
- Control Environment.
- Risk Assessment.
- Control Activities.
- Information and Communication.
- Monitoring.
I think there is an implicit assumption that the "independent auditor" who performs the SAS 70 audit will use some best practice framework as a benchmark (such as COSO/COBIT), but this is not required.
POINT 4: THERE IS NO PASS OR FAIL...THERE IS NO CERTIFICATION
So as we can see from the preceding points, you cannot pass or fail because there are no independent criteria against which to score, and you cannot be certified for the same reason.
What can happen is that the report will describe some control that is operated incorrectly or ineffectively such that its defined objective is not met.
So, finally, what use is a SAS 70 report?
Well, it is promoted as a solution for confirming Sarbanes-Oxley compliance.
The Wikipedia article even says:
"SOX heightened the focus placed on understanding the controls over financial reporting and identified a Type II SAS 70 report as the only acceptable method for a third party to assure a service organization's controls."
This is nonsense. A SAS 70 report is a very useful tool for communicating audit results, but it is not a statement of compliance with SOX or any other regulation.
SAS 70 is useful and misleading at the same time - and which it is depends on how it is sold to you and how much you understand about the process.
A "white paper" highlighting many of the failings of SAS 70 even exists: SAS 70: The Emperor Has No Clothes. It was written by an ISO 17799 consultancy, so they have their own agenda to push, but I do not, and I have independently come to the same conclusions (as have other bloggers).
So what is the benefit of a "clean" SAS 70 Type II report? Here are a couple of thoughts:
- From sas70.com; "Many SAS 70 audit engagements result from user organizations making repeated requests of the service organization in order to gain an understanding of the internal control environment at the service organization." So it can save time when end user organisations want to audit a service provider. It is worth remembering that a SAS 70 report is meant to be an auditor-to-auditor communication tool.
- If a service company is actually making an effort to apply COBIT or some other defined best practice, the audit can serve to highlight to them areas for improvement, but only if they ensure that they provide the appropriate criteria to the auditor (see point 2 above).