The paper may not be copied, duplicated, changed or distributed without the written permission of the authors.

Appreciative Inquiry: An Innovative Approach to Evaluation of Organizational Change in a Transnational Pharmaceutical Company
© Bernard J Mohr, Elizabeth Smith, Jane M Watkins
This paper tells the story of a collaboration between SmithKline Beecham and The Synapse Group, Inc. – how we came to work together on a ground-breaking project for both parties; how the project unfolded; and the outcomes and learning both parties achieved from the experience. It concludes with an Epilogue that sets the story in the context of Appreciative Inquiry and the emerging paradigm.
During 1998, the Research and Development division of SmithKline Beecham Pharmaceuticals undertook an evaluation of a major and innovative simulation-based training programme, the SB Discovery Simulation.
This training programme had been designed to help scientific leaders and key contributors work effectively within the new drug discovery research paradigm. Over the course of three intensive days, participants worked in research teams utilising a dynamic computer model of the drug discovery process. The aim was to create a realistic learning environment in which a drug company attempts to maximise its portfolio of research efforts over a ten-year period.
At the time of this evaluation process, 480 people from SmithKline Beecham in the US and the UK had attended the programme - a critical mass of the original target population. End of course evaluations were conducted for each programme. The data collected was largely favourable, with participants reporting an increase in knowledge and understanding in a number of areas. Suggestions for improvements were acted upon wherever appropriate, so that the programme was continuously refined during the roll-out.

The Organisation Development (OD) group, who had led the design and delivery of the Discovery Simulation in conjunction with senior Discovery research scientists, were satisfied up to a point that the simulation now worked well and consistently elicited positive responses from those who attended. However, they had made a major investment in this programme, and decided it was important to conduct an in-depth evaluation study to ascertain whether it had made a significant and lasting impact on the organisation. If such an impact could be demonstrated, they also wished to determine how to further capitalise on this investment.

To find an outside evaluator, SmithKline Beecham put out an invitation to tender (a process referred to in the US as an RFP or Request for Proposal) to several consulting groups which they knew would offer different approaches, but still with the expectation that they would conduct a reasonably traditional evaluation process where the consultants interview people in the company, compile the data and give the client a report of their findings. The usual report includes the strengths and weaknesses of the simulation and its outcomes, and recommendations from the consultants for next steps.
One of the companies that received the invitation to tender was The Synapse Group, Inc., a consulting firm based in Portland, Maine. The Synapse consultants responded with a proposal that turned traditional evaluation thinking on its head. The proposal suggested the use of Appreciative Inquiry to conduct a "valuation process," sometimes called "Embedded Evaluation". They believed that this approach could give SB information about the strengths of the programme in ways that would create positive forward momentum - by taking the best of what had happened and using it to create a collective image of a desired future as a basis for moving the programme in the direction of its best practices.

Stories contain a richness and depth of experience not typically found in more traditional reviews of past work. Because of this richness, and because stories are so central to the practice of Appreciative Inquiry, we have chosen to tell the story of this project in two voices as a way of sharing our memories with you. The consultant (Synapse) and the client (SB) initially wrote their separate stories without reference to each other. The following story has been created by interweaving pieces from these two original narratives.
Getting Started

SB: Our story, as the client, begins in June 1997, while we were still in the midst of delivering the Discovery Simulation. We had agreed we wanted to conduct an in-depth evaluation study. Ideally we wanted to have it completed by the end of the year to enable us to make timely decisions about any re-development and re-launch of the programme in 1998. A team of three of us who had been closely involved with the design and delivery of the programme formed a committee to produce an invitation to tender for the evaluation project, to interview a short list of consultants, and then to select and oversee the work of the chosen consultant.

The invitation to tender described the following aims of the evaluation study:

(i) to assess whether the learning reported at the end of the programme had translated into changes of behaviour and improved performance back in the workplace;

(ii) to reinforce the learnings from the programme;

(iii) to gain insights into the most effective use of simulation technology in order to apply best practice to the design and development of future programmes;

(iv) to collect data that would help to inform decisions about further applications of the existing Discovery Simulation, and potential follow-up activities.
Synapse: As the consultant our story begins in July of 1997 with a phone call from a colleague who was thinking outside the box. His call caught me doing some catch-up reading in the makeshift office on the second floor of our summer house. It was great to hear his voice, until he told me the purpose of his call. “Would you be interested in responding to an RFP [Request for Proposal] for an evaluation study?” he asked.

Perhaps it was the hamburger I was still digesting, but the two terms “RFP” and “Evaluation Study” gave me indigestion. While I have always found great value in jointly crafting a customised journey of learning and change with a client, the term “RFP” can all too often mean participating in a sham process designed to satisfy internal bidding regulations while all along the client has already determined who the best person is for the job. Worse yet, my experience to date with “Evaluation Studies” is best captured by a person being evaluated who said, “Look, for the last year, we have been putting our blood and guts into creating a success here – and now you come along with a mandate from upper management to tell them if what we are doing is any good? And you want our participation? Give me a break!”

So, it was with some considerable reluctance that I told my colleague, “Sure, have them send me the RFP”. I was almost ready to say goodbye, when he added “Actually, I was wondering whether you might do this as an appreciative process?” This took me by surprise! How, I asked myself, could the positive perspective of Appreciative Inquiry (AI) be applied to a fundamentally critical review process and still produce “valid” results?
My practice had for the last several years been moving away from deficit based situation analyses, towards a focus on innovation through understanding and expanding that which is working well in a system. But, I obviously hadn’t yet shifted my own paradigm sufficiently to imagine the possibility of an evaluation which didn’t emphasise primarily the gaps or, as we euphemistically call them, the “opportunities for improvement”. And so began an incredible journey which would stretch the practice boundaries of the emerging field of Appreciative Inquiry and involve us with a willing client to explore somewhat uncharted seas. Thank goodness for colleagues who can see possibilities!
SB: As we completed the invitation to tender we realised that we were asking a lot of one consultant and indeed from a single approach to evaluation. We had a broad range of objectives, and good reasons for wanting to accomplish all of them in this study! We thought we might ultimately have to compromise on some, but we weren't ready to do so just yet and wanted to get a range of views on how much could be accomplished.
For this reason we cast a wide net in sending out the invitation to tender, deliberately selecting consultants from both the UK and the US who represented a range of contrasting approaches. The consultants we contacted included The Synapse Group, whom one of our selection team knew professionally; he had already established that they might have an interest in tendering.

Synapse: By the time the RFP arrived in late August, the then head of the SB Pharmaceuticals R&D OD group had participated in our firm's annual summer gathering in Maine. At that meeting we had described some of the theory and research underlying Appreciative Inquiry to him, and we had got to know a bit about the organisation (SB) that was to become our client. The RFP described, in traditional but extremely clear and competent form, the company's wish to evaluate the success of their computerised simulation-based workshop for senior scientists. But this RFP was different in other ways: it had a clarity of focus and a tone of openness to innovation that was exhilarating. Clearly someone had given a lot of thought to this document! My attitude towards the whole project shifted upwards significantly.
Intrigued as I was by the possibility of conducting an evaluation from an appreciative perspective, I also knew that what we might propose to the client not only ran counter to prevailing assumptions about good evaluation, but to my knowledge had not been done quite this way before. In fact I was worried that even the language of "appreciation" would set off warning bells in a client who was focused on "evaluation". I realised I needed help with shifting from a traditional approach to evaluation to one guided by the philosophy and principles of AI.

Fortunately, Jane Watkins, a Synapse consulting associate and a partner in our NTL-sponsored workshops on AI, had recently done developmental work at Cambridge in something she called “Embedded Evaluation.” I called her, explained the situation and invited her to join me in a preliminary client teleconference, with the goal of seeing whether the client might be interested in a "Valuation" process rather than an "Evaluation" process. To our surprise and great delight, in this very first contact with the client project team, they indicated an understanding of the drawbacks of traditional evaluation while expressing enthusiasm for an Appreciative “Embedded Evaluation” approach. The sun seemed to shine more brightly and my heart beat faster. We were starting to roll.

The clients had expressed their interest and openness, although they hadn’t yet made any commitments. The ball was back in our court. We had to move from the conceptual to the practical stage. The client needed a proposal, and they needed it yesterday, of course! Over the next eight months we were to revise the original proposal three more times - in a continuing journey of partnership with our clients and a commitment to ongoing adjustment in a process which would increasingly become a forward-focused intervention in its own right, rather than the backward look of a typical evaluation study. Our first proposal had six phases, as shown below:


Phase I: Familiarisation and planning

1. Consultants familiarise themselves with the company and the Discovery Simulation programme through written data and some key informational interviews.

2. With SB, jointly identify a project reference group that will be our primary planning and guidance team.

3. Meet with project reference group for briefing to:

• familiarise reference group with Appreciative Inquiry (including review of related scientific research, underlying theory, an experiential activity plus Question and Answer);

• clarify project goals, roles, approach and key assumptions;

• describe draft work plan and identify any obstacles or changes required.

Phase II: Select and train interviewers from client system

1. Select SB “interviewers”

2. Train SB “interviewers”

3. Create interview protocol customised to SB Discovery Simulation

4. Complete stakeholder scan and assign interview responsibilities

Phase III: Conduct interviews

1. SB “interviewers” conduct 8-10 interviews per person

Phase IV: Analyse data

1. Compile data from interviews

2. Identify themes

Phase V: Findings and validation

1. Identify the programme’s strengths, suggest areas to redesign, and develop tentative next steps

2. Brief reference group on findings and validate preliminary recommendations for applicability in SB culture

Phase VI: Next steps and reporting

1. Define suggested next action steps (based on reference group validation of data and preliminary recommendations)

2. Prepare final report and present to client

SB: The initial contacts we had with the Synapse Group both intrigued and puzzled us! The Appreciative Inquiry and associated Embedded Evaluation approach was new to us all, and the detailed theoretical background provided in the RFP response was a lot to take in. However, as a group of OD and learning specialists we were enthusiastic about finding out more and potentially adding a new "tool" to our "kit bag". We also felt that in Appreciative Inquiry we might have found an approach that was broad enough to address most, if not all, of our objectives – something that was going to prove to be more than just an evaluation of the programme.

In our ensuing discussions with the consultants we began to grasp the essence of what AI is all about. The idea of looking for what is exceptional in something and seeking to do more of that rather than looking for what is wrong and fixing it ran completely counter to our classic views of evaluation – but perhaps that was the appeal, since we had already done plenty of the latter as we evolved the programme! We felt that this approach would be particularly useful in helping us to tease out the key positive elements of the simulation experience – ensuring we captured this learning for future simulation programmes in SB. It also seemed to offer a way of reinforcing and building on the learning and experience people had taken from the workshop.
We had two major concerns with the approach. The first was that we were working with a group of Discovery Research scientists, whose training, ways of thinking and approach to their work are exceedingly analytical and critical, which to our thinking at that time seemed antithetical to the AI approach. We were concerned that they might feel we were avoiding looking at the negative, simply seeking positive reinforcement and generally being too subjective and not rigorous enough in our approach. The second was that the proposal called for all the alumni of the Discovery Simulation (around 480) to be part of the study, participating both as interviewers as well as interviewees. This was potentially a major organisational intervention which we did not feel could be supported at that time, and we also did not feel able to ask for any individual Discovery scientist to give more than an hour apiece to the study.

In discussion with the consultants we were able to address the second concern by scaling down the approach. Instead of having Simulation alumni conduct the interviews, we agreed to assemble an evaluation team comprising the two consultants, the three members of the SB evaluation selection committee, two other SB representatives from HR and OD respectively, and a former Discovery Director who had been acting as consultant to and faculty on the programme. Between us we would interview approximately 20% of the programme participants. Thus instead of completely "outsourcing" the effort to the consultants, as we had originally intended, we made ourselves part of it (and in doing so committed a much greater amount of our time and effort), but we stopped short of increasing the resource required of the Discovery group.

We continued to grapple with the first concern. As we deepened our understanding of AI in relation to the scientific approach we came to realise the two approaches are not the antithesis of each other, since appreciating what is good does not preclude applying constructive and rigorous critique. We also came to understand how we could use interview questions about wishes for the future to surface the criticisms, but in a way that immediately turned them into recommendations that moved things forward. Nevertheless, we felt that the AI approach was likely to encounter some resistance in what was essentially a problem-solving culture – but we decided to take the risk and go with it!
Synapse: Within a month after the first proposal, and following several more transatlantic telephone calls, we had collaboratively determined the changes that would be needed in our original proposal. A couple of weeks later we had begun work. The major adaptations we made to our first proposal included: (i) deciding to proceed without a steering committee (called the project reference group in the first proposal); (ii) using the external consultants as interviewers in addition to using the internal project team in that role; and (iii) scaling the number of interviews back to about 100 from the original proposal of interviewing all 480 participants who had completed the training programme.

Where we had assumed the need for a steering committee composed of senior managers who would be “barrier busters” and “champions” for this project, our clients explained that this was a role better filled by their simulation “design team” – the people who had developed the content for the computerised simulation, the impact of which we were to evaluate. Although we had proposed to limit our role to the development and support of the internal SB team of scientist interviewers, our clients explained that the time of their scientists was too scarce and asked us to bolster the SB interviewer team by conducting some of the interviews ourselves. Frankly, this was a hard pill for us to swallow. Even though it meant more billable time for us as consultants (always nice), it went against our core beliefs that processes such as this are much more effective when implemented by the client system. The clients however were persuasive, and firm! We agreed to participate in the interviews – and as it turned out, we were absolutely delighted with that decision.

Scaling back from a plan to interview all 480 scientists who had participated in the simulation to a decision to interview about 100 people was also a problematic choice in our view. From an AI perspective, the likelihood of change in the client system is directly correlated with the number of people directly engaged in appreciative interviews. Naturally we wanted to interview 100% of the target population in order to maximise the impact for the client system. But, we soon realised that this would be biting off more than the client system could chew for the moment.
SB: A final change from the client's perspective was realising that we had to accept that this approach was going to take longer than we had originally scheduled for the project. We pushed our deadline back by 3 months to allow ourselves enough time for our preparation as AI interviewers and to partner in development of the customised protocol.
SB: The compromise project plan then unfolded. We began by providing the consultants with as much information as possible about the Discovery Simulation, and they were each able to spend some time observing part of a programme.

Our first steps also included informing the Discovery Simulation Design Team of the approach we would be taking. This was a team of senior scientists who had designed the Discovery Simulation and acted as faculty throughout the roll-out of the programme. Their main interest was in finding out the impact of the Simulation experience back in the workplace and in gathering information to support and inform a second-phase roll-out. They broadly supported the approach and were very interested in seeing the results, but were not ready or able to participate in the interviewing team. However, they did agree to have the interview protocol tested on them before it went out to the wider audience, which was a very valuable step in our preparation for the study. (This data was included in our overall data set, which initially seemed strange to us, as it seemed to run counter to "pure evaluation" methodology. We came to understand that in this approach, all data can be considered and contributes to the richness of the whole.)

Phase II: Select And Train Interviewers From Client System
Synapse: By mid-December we had jointly identified the small team of SB people (our “core group”) who would join us in creating the customised protocol, conducting the interviews, analysing the data and writing the final report. The group was drawn from both sides of the Atlantic since the interviewee group itself was transnational.
SB: The next step was for the SB team to deepen its understanding of the approach. We agreed to accomplish this in two workshops, one for the US-based team and one for the UK-based team, where the consultants would share with us more of the theoretical background to AI and their experience of its application, and give us an opportunity to explore it further, both through discussion and practical exercises. Face to face time is always at a premium and its use has to be maximised at SB! Therefore we also built time into these same workshops to draft and pilot the interview protocol which would be used in the evaluation study. The goals for the first two day "AI Orientation Workshop” with the UK members of our core group were agreed to as follows:

By the end of this workshop, to have:

1. Clarified the difference between the AI/Embedded Evaluation approach and traditional evaluation approaches.

* Introduce Appreciative Inquiry

* Discuss the shifting paradigm

* Examine the theory of change that underlies this approach

* Explore Appreciative Inquiry principles and practices
2. Agreed on desired outcomes and critical success factors for this evaluation process - and how we will get there.

* Discuss the desired outcomes and critical success factors

* Discuss the major phases of the AI/ Embedded Evaluation approach

3. Jointly developed a customised draft interview protocol for gathering data using this approach - and have practised it.

* Agree importance of how we capture data and what data to record

* Create a draft customised interview protocol

* Practise AI interviewing skills

4. Jointly created a plan for collecting and analysing the evaluation data.

* Identify Stakeholders

* Agree key steps in data collection

* Decide how the data will be collected, organised and compiled

5. Agreed on next steps (actions, responsibilities and dates) for all of us.
Drafting the protocol was an interesting experience for us because our previous training and inclination kept taking us back to more classic evaluation questions that were at odds with the appreciative approach. We were constantly struggling to remain within the new mindset. We also wanted to add more and more questions to ensure that we got at everything we were looking for in the evaluation study objectives – with the danger that the interview protocol would become unwieldy.
Testing the protocol was even more interesting and challenging. Introducing the interview methodology and then keeping up the AI perspective throughout the interview was something we found needed practice. Dealing appropriately with negatives without seeming to avoid the issue, and drawing out stories from people whose memories of the Simulation were several months old, were two particular challenges. However, the experience was also more rewarding than we expected. We found that mostly the test interviewees responded well to the approach (even though they commented that it was different from what they were used to), and were able to describe vivid pictures and examples. It was also enjoyable to be "allowed" to some extent to participate in the dialogue with them, rather than take the usual more aloof interviewer's stance! Finally, it was interesting to note that the questions that yielded the richest and most useful data tended to be adaptations of the generic AI questions rather than the more specific ones we had added! By this stage we were getting really enthusiastic about the use of this approach.

Synapse: The workshop was successful. Our participants (the UK SB interview team whom we called our ‘core group’) walked away with an enthusiasm and understanding of what we were trying to accomplish that both astonished and excited us.

Now we needed to prepare for the second workshop in the USA – which meant some homework for the Synapse consulting team.
We needed to (i) refine the protocol that had been developed and piloted over the previous few days, and (ii) create the detailed summary sheets to be used at the end of each interview. Additionally, our clients (the workshop participants) asked us to (iii) modify the upcoming US orientation workshop so that more time was devoted to discussion of the emerging paradigm and its connection to AI theory, and (iv) include more time to practise the interviews using the customised protocol.
By mid-January we had conducted the second version of the orientation workshop (this time in the US) and the interviewees had been contacted and scheduled. With some slight modifications to the protocol and the supporting documentation, we were ready to start the Inquiry process.

The protocol we developed is shown below:

1. Before we get to the questions about the Discovery Simulation, I’d like to know a bit about your experience here at SmithKline Beecham and I’d like to do it in the style of Appreciative Inquiry. Could you tell me a story about a time at SB when you felt particularly excited, creative, productive? What happened? Who was involved? What part did you play?

2. Now I’d like to ask you about your
