Open and Distance Learning: Revisiting Planning and Management
QUALITY ASSURANCE IN ODL INSTITUTIONS
Badri N Koul
1. Introduction
At the initial stages of the innovation called ODL (associated with the establishment of the British Open University in 1969), the quality concerns of the pioneers were focused on its socio-academic credibility vis-à-vis the time-honoured face-to-face system of learning/teaching, wherein standards, not quality, were the watchword—adequate institutional infrastructure, appropriately qualified staff and, in relation to a programme on offer, prescribed entrance qualifications, curricular content, duration of studies, a scheme of educational transactions and a scheme of learner evaluation reflected those standards. ODL institutions, under pressure to gain a foothold on the conventional turf, toed the line, but lost no time in identifying new criteria—i) the process of developing course materials, ii) the nature of the course materials themselves, iii) the provision and practice of learner support services, including the nature of pedagogic transaction/interaction, and iv) flexibility in relation to entrance qualifications—to redefine standards in their fledgling enterprise. The last two of these continue to be questioned in circles/societies that find it difficult to see the paradigmatic shifts that have taken place in the institution, philosophy and purpose of education over the past four decades. By the 1980s, ODL operations had reached a level of maturity that helped practitioners articulate ODL-specific criteria (socio-philosophic, systemic and transactional) that defined quality in ODL in a way that accommodated the notion of standards within the concept of quality assurance, and the need to toe the line of conventional criteria was seen to be fading away gradually. By the end of the 1980s and the early 1990s, links between funding and accreditation became more compelling than ever, consequent upon which quality assurance protocols pertaining to both face-to-face and open distance education were developed and applied in many countries, mostly the developed ones. This necessitated the creation of mechanisms, such as accreditation bodies and quality assurance cells, to maintain and activate these protocols. Before these processes could settle and established practices emerge, in the mid-1990s information and communication technologies (ICTs) added a new and epoch-making dimension to ODL operations—the first online courses were developed and offered through the web; and for the last ten years we have been watching in amazement the overwhelming impact that ICTs have made, and are making, on all walks of life including education. Once again, ODL systems have not only to revisit their quality assurance protocols, but also to identify the new quality issues posed by contemporary ICT applications and revise quality assurance strategies accordingly, all of which has significant implications for planning and management in ODL.

2. Why quality assurance in ODL?
The above brief review of quality concerns in ODL does not explicitly bring out the ‘why?’ of quality assurance in the present context. New compulsions1 have come up for consideration:


  1. Pressure for enhanced services from different constituents of the state and society is increasing, and the institution of education is hard pressed, for want of funds and other resources, to adjust to these pressures. This situation is pitting institutions against each other for funds and learner registrations; only quality dispensation can ensure institutional survival.

  2. Demand for expanded access to the programmes being offered in higher education, and for programmes that are relevant to the employment market and labour force, is increasing exponentially. Only quality courses and training modules that satisfy learner/customer demands and needs, return the value of the money spent on them and add value to investments will sell and survive.

  3. ICT infrastructure is uneven, and the related human resources and expertise differ from country to country. Clearly, the greater the dependence on ICTs, the greater the inequity in access. Consequently, a discriminatory educational provision is emerging across the board. To undo the effects of this situation, institutions in developing countries in particular need to lift their standards and improve the quality of their educational transactions.

  4. The existence of and access to ICTs do not ensure their effective utilization, for want of adequate infrastructure, an enabling legislative and policy framework and trained personnel. Undoing these disabilities amounts to quality enhancement, which is a must for every institution, as ICTs are bound to pervade all educational operations.
  5. A lack of the required national and regional capacity for promoting and implementing ODL operations leads to increasing dependency on developed countries, which may not hesitate to sell sub-standard educational ware to unsuspecting third-world recipients. Another aspect of this issue is the phenomenon of aggressively publicized web-based cross-border education—‘what is being dispensed and to whom’ are questions that deserve attention. Further, growth in the number and diversity of provider institutions (thanks to the ICT revolution) causes variation in the costs and quality of the programmes being offered. This points to the need for regional and international accreditation bodies and the related procedures in order to protect the interests of learners, which in turn points to the changing roles of the national governments and the regional bodies concerned.


  6. In many developing countries, the quality and effectiveness of ODL remain suspect (partly because of the conventional mindset and partly because of the known weaknesses of local initiatives) among academics as well as employers and society at large. In order to boost the socio-academic credibility, and so the status, of ODL programmes/operations, their quality has to be improved at all costs.

  7. It is not unusual for academics in dual-mode institutions to resist the development and integration of ODL programmes with on-campus courses/programmes, as making inputs in this area is seen as an ‘add-on’ to their routine responsibilities. Lack of training in, and aversion to, the use of technology is the other cause of this resistance. In order to remove these bottlenecks, institutions need to create quality assurance cells that mould the existing staff for multi-functionality and create the need for them to mobilize their services to complement each other’s inputs.

  8. In many dual-mode institutions, the existing financial management, faculty and support staff are geared to working in and for traditional on-campus course delivery (reverse cases2 are also in evidence now). An effective switchover to a new/different system requires fundamental, institution-wide changes. Unless there are quality assurance cells in operation within the institution concerned, the changeover is going to be messy and slipshod, resulting in poor dispensation for the new learner populations.
  9. Overall, the characteristic features of didactic transactions are changing significantly in the wake of the ICT revolution, requirements of cross-border education and the increasing learner mobility. This necessitates reorientation of learners, academics, educational administrators and the providers of student support services. This need, however, is not likely to be attended to unless quality assurance mechanisms are in place and in operation.

These and many more concerns (especially those emerging from diverse ICT applications, globalization and the general thrust of educational democratization) make it necessary that quality in ODL products, processes and outcomes is assured locally, regionally and universally, so that a learner gets the worth of his/her money, time and effort. It is a social obligation that ODL institutions must fulfil at any cost.


3. Approaches to quality assurance (QA)
Partly because quality assurance in the educational enterprise is a relatively recent phenomenon and partly because quality related perceptions and concerns differ from place to place, different institutions/countries have developed different approaches to quality assurance. Some of the better known approaches are outlined below.
Baldrige approach: Promoted by the American Society for Quality since 1987, this approach advocates integrated management of organizational performance with a view to progressively improving the value of education to learners and other stakeholders, institutional capabilities and effectiveness, and thus the overall quality of education.

It advocates institutional evaluation3 on the basis of seven categories of criteria used to assess and grade the levels of excellence in institutional performance. The categories are: i) leadership, ii) strategic planning, iii) focus in terms of learners, stakeholders and the market, iv) management of data, analysis, measurement and knowledge, v) focus in terms of faculty and the staff in general, vi) management of institutional processes, and vii) the outcome of organizational performance.

ISO 9000:2000: The International Organization for Standardization (ISO) issued its quality management system standards, the ISO 9000 standards, in 1987 for the quality control and reliability of manufactured products. Though their application in the field of education and training started in the 1990s, guidelines for the application of ISO 9000:2000 (the latest version) to education were approved only in 2002.

For the evaluation of educational institutions, it advocates 21 criteria under four concerns: responsibility of the management; management of resources; product realization; and measurement, analysis and improvement. For example, under the first of these, assessment focuses on i) the commitment of the management, ii) customer focus, iii) quality assurance related policies, iv) planning, v) assignment of responsibilities and powers, vi) communication and vii) review practices. An institution desirous of ISO certification should demonstrate its compliance with these standards over a period of time, to the satisfaction of an ISO-approved accreditation body.

In general, an ISO 9000 certification for an educational institution assures “that it is well organized and that the outcomes of programmes and courses meet the intended goals and needs of the users; however, it does not necessarily guarantee that the content of these courses and programmes meet a particular educational standard”.4 These standards, therefore, need to be complemented by those pertaining to the content of the education/training in question. Further, the approach is demanding on account of both the costs and the effort required.

Kaplan and Norton approach: Initially proposed in 1992 and meant for profit-oriented organizations, this approach was subsequently modified for use in educational institutions5. It focuses on the relation of an institution’s mission/objectives to its operations and achievements from four distinct viewpoints: financial, customer related, institutional process related, and innovation and learning related.

For example, in the case of innovation and learning related objectives like the quality and methods of teaching, levels of and collaboration in research, and learner quality, the corresponding assessment measures would be learner satisfaction, timely and planned learning activities, publications, papers in refereed journals, number and quality of research students registered, number and quality of seminars and conferences conducted, etc.

Barnett approach6: This approach emphasizes a culture of quality rather than the management of quality, the latter being advocated emphatically in the approach called Total Quality Management (TQM), which works for continuous improvement partly by influencing the mindset of the people involved and partly by improving institutional processes. In TQM, the factors focused on are the customers, the efforts put in to achieve improvements continuously, staff development, committed teamwork and continuous monitoring/reviewing to effect improvements. The Barnett approach, on the other hand, emphasizes facilitating the process of improvement through self-criticism within the institution, leading to continuous corrections and improvements. With its focus on learners’ learning, it considers four activities, namely i) curriculum and course design, ii) the teaching and learning transaction, iii) learner assessment and iv) staff development, as the main means of building a culture of quality in education. There are other activities, such as research and collaboration, which contribute to such a culture, but the four activities mentioned constitute the bases of a quality culture.

4. What are the international practices?
International practices fall into two distinct categories: country specific practices and region specific efforts. The former pertain mainly to the leaders (mostly advanced countries) in the quality movement, while the latter are collective efforts for extending to other countries what the leaders have achieved in their respective places.

Among the first category worthy of a mention are the practices of Australia, the UK and USA.

In addition to the respective state and territorial accreditation bodies, Australia established the Australian Universities Quality Agency (AUQA) in 2000 as a national body to promote and safeguard quality in higher education. It does not provide a code to be followed by institutions desirous of accreditation, as each of them is expected to have a quality assurance system that corresponds to its own mission and goals. “In order to check its own policies, procedures and practices, to learn whether it is achieving its objectives, and to determine how to improve its performance, an institution or agency must have in place appropriate quantitative and qualitative measures and indicators.” 7 An institution desirous of the central audit prepares a self-review in accordance with its own system of performance evaluation and submits it to AUQA, where it is peer-reviewed before the audit team undertakes site visits. In essence, the audit report is a review of the internal quality assurance system of the institution concerned. It identifies ‘commendable practices’ and ‘areas for improvement’ for the institution to work on. Funding from the central, state or territorial government is related to how well the audited institution responds to the audit report.

The UK established its Quality Assurance Agency for Higher Education (QAAHE) in 1997. It has developed a code of practice in relation to ten themes considered crucial for maintaining academic standards in the dispensation of higher education in the country. The ten themes8 are: i) postgraduate research activity, ii) collaborative activities, iii) provision for differently abled learners, iv) external examinations, v) learners’ complaints/appeals related to academic matters, vi) learner assessment, vii) approval, monitoring and review of programmes, viii) information, counselling and career guidance, ix) recruitment and admissions, and x) placement. The code provides principles of good practice in relation to each of the identified themes and the corresponding guidelines on how to follow these principles. Institutions are expected to report how well the set principles are being followed. This self-assessment report is reviewed by QAAHE and followed by audit visits. It is the audit team that finally records ‘broad confidence’, ‘limited confidence’ or ‘no confidence’ in the performance of the institution concerned. Overall, QAAHE assures the taxpayer that the standards and quality of higher education are safeguarded and improved continuously through subject reviews and institutional audits, as state funding is linked to institutional performance.

At the national level, the USA established the Council for Higher Education Accreditation (CHEA) in 1997. Like the US Department of Education, CHEA is authorized to grant recognition to accreditation agencies; this recognition is not permanent and is reviewed periodically. Institutions seeking accreditation approach an accreditation agency and submit a self-study report based on the standards and criteria of the agency concerned. This report undergoes peer review, which is followed by a site visit to assess the claims made in the report. Depending on the findings of the visiting team, the institution is either granted or denied accreditation. Accreditation, if granted, is not permanent and has to be renewed periodically.

Going beyond such national practices, the second category of international practices comprises the activities of regional and international bodies. Some of the better known bodies are detailed below.
Established in 2003 in Hong Kong, the Asia-Pacific Quality Network (APQN)9 is a network of quality assurance agencies that promotes cooperation among such agencies in Asia and the Pacific region, besides helping them improve the quality of higher education in their respective areas. It follows a cafeteria approach in helping its members—it helps build quality assurance bodies in countries that have yet to establish them, trains quality assurance personnel in countries that have such bodies but have yet to make them fully functional, trains trainers in field operations wherever needed, and helps improve the functioning of agencies that have been in place for some time and are due for improvements and reforms.

The European Association for Quality Assurance in Higher Education is the new name (given in 2004) of the European Network for Quality Assurance in Higher Education (ENQA), created in 2000 as a follow-up to the Bologna Declaration (1999); it works for convergence in the quality assurance criteria and processes to be followed in the signatory countries. Among other things, the “Standards and Guidelines for Quality Assurance in the European Higher Education Area”10, prepared by ENQA and adopted in 2005, provide standards for both the internal and external assessment of institutions and also for external quality assurance agencies. Without being prescriptive, these standards and guidelines are meant to provide a common basis on which institutions and quality assurance agencies can build their systems, so that academic awards may be transferable across institutions/countries. In the process of external assessment, they advocate the use of institutions’ internal quality assurance activities and reports, deference to institutional autonomy and concern for learners’ needs and interests.

Established in 1991, the International Network for Quality Assurance Agencies in Higher Education (INQAAHE)11 shares information on quality assurance practices and related thinking with other networks and agencies, with the objectives of i) promoting good practices in quality assurance processes, ii) assisting research in the maintenance and management of quality assurance systems, iii) providing expertise in the creation of new quality assurance systems and agencies, iv) promoting collaboration among such agencies and thus learner mobility and credit transfer across institutions and borders, v) providing information about institutional standards across borders, and the like. The “INQAAHE Guidelines of Good Practices”, finalized and accepted in 2005, provide advice and guidelines to quality assurance agencies to help them ensure the quality of their own operations while respecting the cultural variety that generates their theory and shapes their practice in their distinctive operational environments.
5. Mechanisms of quality assurance
As indicated in the details in Sections 3 and 4 above, a comprehensive quality assurance system comprises a few mechanisms emphasized and combined in various ways depending on the types of institution and country/region it is used in.

The basic mechanism lies in the institutional legislation that requires the institution concerned to work for quality assurance in its products, processes, services and outcomes.

Usually, such legislation finds its operational expression in an institutional quality assurance cell responsible for activating internal quality assurance operations. Individual academics, departments and faculties prepare annual self-assessment reports, which include the assessment of individual subjects/disciplines and programmes. This assessment has to be reflective and self-critical. Besides identifying the weaknesses as well as the strengths of the operations engaged in, it should gauge the extent to which the set goals (institutional, departmental and/or programme related) have or have not been achieved. Such assessment provides for continuous improvement in all aspects of the educational dispensation effected by the institution.

Beyond these major institutional mechanisms, there are state and/or regional accreditation bodies, directly or indirectly legitimized by the government(s) concerned, that receive for consideration the internal/self-assessment reports from institutions seeking accreditation. These reports are evaluated through peer review commissioned by the accreditation body concerned. The review team analyses the reports critically on the basis of the criteria prescribed by the accreditation agency, the government concerned or the institution’s own criteria established for the purpose, as the case may be, and then conducts site visits to ascertain the claims made or implied in the self-assessment reports. The team may look into institutional policies and administrative and learner records, and discuss issues with the institutional authorities, departmental heads, deans, academics, non-academic staff and/or students. Having satisfied itself in all respects, the team states its judgment about the course, programme, department or the institution, as the case may be. This judgment decides the kind of accreditation awarded to the institution concerned. As the award is not permanent, it has to be renewed periodically.

In addition to the above mechanisms which are in use in most of the cases, there are other mechanisms which are, in some cases, integrated with the ones outlined above or used independently in various combinations depending on specific institutional settings, such as autonomous institutions in India and the UK, market driven systems in the USA, the centre-oriented system in China and the like. Accordingly, some institutions go about analyzing purpose-built statistics for development and correctives, some work on feedback collected from learners, old graduates, employers and other stakeholders to look for areas that need improvements, while some evaluate their achievements/failures with the help of indicators or benchmarks for acceptable performance. Still others go by comparisons with best practices that are highlighted in the relevant literature from time to time or are identified for the purpose specifically.
In the present market-driven economies and in the context of a mobile workforce, external assessment for purposes of quality assurance is rated highly, for it appeals to the psyche of employers, taxpayers and other stakeholders alike, being perceived as objective, unbiased and, therefore, dependable. In places like India, however, where higher education institutions are generally self-regulating and autonomous, external assessment is frowned upon as an unwanted intervention that bypasses traditional mechanisms like academic boards/councils, departmental councils, senates, etc. In the UK, however, the attitude of academics had to change as funding got linked to institutional assessment/performance.

In general, the practice of combining internal self-assessment with external evaluation is emerging as a major trend the world over. The negative features of this trend are i) the high costs to be borne by the institutions, ii) the immense and tiring paperwork involved and iii) the accompanying perception that, all said and done, it makes little difference after all.

6. Developing course and programme evaluation strategies
We will begin with course materials, the most crucial factor that forms the foundation of any ODL programme. At the macro-level, the institution offering a course should ensure


  1. that there is a process, appropriately documented, for approving courses and that such approvals are outcomes of specific attention paid to the requirements of the chosen/available mode of delivery—print based, contemporary ICT enhanced, with live support, etc.;

  2. that such approvals are open processes with a provision for scrutiny by external agencies and/or individuals;

  3. that the academic/skills standards of the resulting awards are at par with those provided by other means and also in line with the benchmarks prescribed by regulatory bodies (if any) for such awards;

  4. that the stated objectives of the courses have conjoint relations with the intended learning outcomes;

  5. that the prescribed transactional design together with the scope of study materials are in a conjoint relationship with the criteria and types of learner assessment;

  6. that the study materials are designed in a way that they provide learning activities and opportunities that help the learner reach the academic level necessary for success; and

  7. that materials on offer are reviewed regularly for a) updating and/or revision, b) maintaining their relevance and c) bringing about the required changes in the related processes from time to time.

At the micro-level, study materials should have at least the following attributes:


  1. begin with a study guide (specific to the course concerned) that advises the learners as to how they may manage their time, work through the course and get the best out of it;


  2. have objectives stated in behavioural/operational terms for the learner to monitor his/her progress;

  3. have the content arranged in a sequence and presented using the media that optimizes learning;

  4. have ample advance organizers, access devices, learner activities, illustrations and explanations for the learner to progress through the course smoothly and achieve autonomy in managing his/her learning;

  5. provide assignments to regulate learners’ progress and to provide opportunities for interaction and feedback; and

  6. use the level of language that suits the target learners.

The overall strategy for course evaluation is to assess each and every course against the criteria/attributes listed above. As for programme evaluation, we need to build similar criteria at least with regard to the following three operational domains in relation to all the programmes on offer and evaluate them accordingly.
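Before detailing these three domains, the following sketch (hypothetical Python; the checklist structure, criterion labels and function name are illustrative assumptions, not part of any prescribed QA protocol) shows one way the macro- and micro-level course criteria listed above could be recorded as a per-course checklist, so that a review can report unmet criteria as areas for improvement.

```python
# Hypothetical checklist sketch; the labels paraphrase the macro- and micro-level criteria above.
MACRO_CRITERIA = [
    "documented, mode-specific course approval process",
    "open approvals with external scrutiny",
    "award standards at par with benchmarks",
    "objectives aligned with intended learning outcomes",
    "transactional design aligned with learner assessment",
    "materials provide activities and opportunities for success",
    "materials reviewed regularly for updating and relevance",
]

MICRO_ATTRIBUTES = [
    "course-specific study guide",
    "objectives stated in behavioural/operational terms",
    "content sequenced and mediated to optimise learning",
    "advance organizers, access devices, activities and illustrations",
    "assignments for interaction and feedback",
    "language level suited to the target learners",
]


def areas_for_improvement(checklist):
    """Return the criteria a course has not (yet) satisfied."""
    return [criterion for criterion, met in checklist.items() if not met]


# Illustrative use: a review panel marks what a (made-up) course satisfies.
review = {criterion: True for criterion in MACRO_CRITERIA + MICRO_ATTRIBUTES}
review["materials reviewed regularly for updating and relevance"] = False
review["assignments for interaction and feedback"] = False

print("Areas for improvement:", areas_for_improvement(review))
```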


1. An integrated institutional system should i) be in accordance with and guided by the appropriate national/provincial legislation specific to ODL operations at the various levels of education, ii) have an explicit statement of institutional goals/aims to work for, iii) have ODL-specific strategies and management systems to effect such strategies in order to achieve the set aims, iv) have adequate infrastructure and technology that match the delivery systems in operation, and v) have quality assurance mechanisms to ensure quality operations and products.
2. A comprehensive programme design requires adequate provision for funding, approvals, reviews, infrastructure and human resources in order to bring together and manage:
  • the design of the basic information pertaining to the programme concerned, which should be i) learner-friendly, ii) presented in comprehensible and appropriate language through accessible means of delivery, and iii) comprehensive in its details regarding the features of the ODL system with special reference to the programme concerned and how to cope with it, the funding and study time required, the nature of the prescribed learning activities, the expected outcomes and assessment, pre-programme requirements and post-programme possibilities;


  • curricular design should be realistic and purposeful and instructional design pedagogically sound;

  • programme delivery should i) match the available technology on the one hand and learners’ access to it on the other, ii) follow the planned and agreed upon strategies and modalities (including those about the media to be used), and iii) have contingency plans for all the possible risk factors;

  • learner support services should include pre-course learner guidance and preparation, on-course learner support including counselling and post-course guidance; and

  • learner assessment should entail strategies for formative assessment (to make it reliable and effective as a formative tool) and summative assessment to ensure maintenance of agreed upon standards and the assessment of learner competencies in accordance with the stated learning outcomes.


3. Quality assurance mechanisms to ensure internal as well as external assessment of the products, processes and learning outcomes pertaining to the programme concerned must be in place, in operation and in use for correctives in relation to the programme and also for enhancing the system as a whole.

[We will touch upon this sub-theme in the following Section again.]
7. How to measure quality in ODL?
Measuring quality in ODL depends on the approach, or the combination of approaches, to quality assurance adopted by an institution. Each approach prescribes a few criteria which serve in quantifying the levels of quality a course/programme/institution has reached. As it is not possible to detail all the approaches here, a simple illustration is presented below to explain the process.

Having agreed on the aims and objectives of a particular course (this is done at the very outset, when the curriculum is designed), we may identify the following five components for assessment, so as to improve the quality of the course on the basis of the feedback received.



  1. Curriculum: Here we assess the purpose of the courses concerned, the content, the methods, the materials and the process of evaluation prescribed to achieve that purpose.

  2. Transactions: We assess the corresponding process of teaching and learning including instructional design, presentation, access devices and learner activities, assignments and the related feedback.

  3. Support services: We assess the process and content of the services provided at pre-course, on-course and post-course stages.

  4. Learners' achievement: Especially in open systems, learner achievement needs to be measured in terms of the progress made from the entry level of the learner and not from a prescribed entry qualification. The nature of the provision made to materialize this objective needs to be assessed besides the usual content and process of tests and examinations.

  5. Learning resources: We assess the overall infrastructure, libraries and labs, qualifications and experience of the staff, arrangements for the management of change and the quality of the intellectual contributions made by the faculty/institution.

This assessment provides enough data to measure the extent to which the aims and objectives of the course may have been achieved. Knowledge of this extent within a department/faculty, however, is not enough. It has to be used not only for improving the quality of the programme (of which the said course is a component), but also for purposes of socio-academic credibility/accountability—usually expressed in terms of accreditation.

In order to measure the overall level of the quality achieved, achievements in all the above five factors may be graded according to the following scheme.

A (4 points): Full contribution to the aims and objectives

B (3 points): Substantial contribution to the aims and objectives

C (2 points): Moderate contribution to the aims and objectives

D (1 point): Inadequate contribution to the aims and objectives

Putting all five factors together, the best performance is represented by 5 × 4 = 20 points, while the bottom line (the lowest acceptable score) may be fixed at 15 points.
Courses falling below this minimum level of performance (i.e. 15 points) for a second consecutive year may attract withdrawal of accreditation and/or funds. In this way accreditation functions as a means of total quality management as well as an expression of socio-academic accountability.
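As a purely illustrative aid, the sketch below (hypothetical Python; the component identifiers, helper functions and sample grades are assumptions, not part of the scheme itself) totals the five component grades on the 4-point scale described above and flags a course that remains below the 15-point floor for two consecutive years.

```python
# Hypothetical sketch of the five-component, 4-point grading scheme described above.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1}

COMPONENTS = (
    "curriculum",
    "transactions",
    "support_services",
    "learner_achievement",
    "learning_resources",
)

THRESHOLD = 15                      # lowest acceptable score
MAX_SCORE = len(COMPONENTS) * 4     # 5 x 4 = 20 points


def course_score(grades):
    """Sum the points for the five assessment components (component -> letter grade)."""
    return sum(GRADE_POINTS[grades[component]] for component in COMPONENTS)


def accreditation_at_risk(yearly_scores):
    """True if the course fell below the threshold in the two most recent consecutive years."""
    return len(yearly_scores) >= 2 and all(score < THRESHOLD for score in yearly_scores[-2:])


# Illustrative use with made-up grades for one course over two consecutive years.
year_1 = course_score({"curriculum": "B", "transactions": "C", "support_services": "C",
                       "learner_achievement": "B", "learning_resources": "C"})   # 12 points
year_2 = course_score({"curriculum": "B", "transactions": "C", "support_services": "D",
                       "learner_achievement": "B", "learning_resources": "C"})   # 11 points

print(year_1, year_2, "out of", MAX_SCORE)
print("Accreditation at risk:", accreditation_at_risk([year_1, year_2]))
```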

8. How can institutions develop their own QA systems?
It is clear that at present there is no single approach to, or single model of, quality assurance that can be prescribed for universal adoption. This situation is not going to change in the immediate future, as an ever increasing number of new institutions keeps coming into the fold. Convergences, however, are being worked out and implemented through regional and international quality assurance bodies. A COL study, Towards a Culture of Quality12, brings together case studies from twelve institutions across Commonwealth countries, describing their respective quality assurance concerns and practices. As one would expect, no two are alike, but there are lessons to learn.

For example:

1. The case of Kyambogo University, Uganda, pertains to a teacher training programme. It is unique in that, in the absence of funds, adequate human resources and infrastructure, it banks on the significance of attitude and ethos in the process of quality assurance. For Kyambogo, quality in ODL is a function of the care an institution can give to its learners. Quality study materials to begin with, and then learner care, materialized as selfless and intimate acts of support that provide immense learning satisfaction and ensure learner involvement, are the two factors that ensure quality in ODL dispensation.

2. The case of the University of Guelph, Canada, details a comprehensive quality assurance programme founded on state-initiated quality assurance legislation that covers both face-to-face and open distance education programmes. The said legislation is a statement of the state’s expectations together with the directions to be followed to fulfil those expectations. With its focus on learner needs, the university has, in turn, set up institutional mechanisms to prepare policies, implement them, monitor implementation and outcomes, and put correctives in place where needed. This case highlights commitment to quality assurance at all levels of operation—national, provincial and institutional.

3. The Open Access College, South Australia, outlines a case of re-engineering—a switchover from high-frequency radio technology to contemporary ICT applications for primary-level distance learners. It is a case of quality assurance in project management/implementation, with the objective of establishing an educational system that finds quality enhancement in ICT applications. The strategies to ensure quality comprised strategic planning focused on the appropriateness of curriculum and methods; benchmarking that addressed the school’s commitment, technology and educational transactions; risk analysis focused on possible detrimental events and the corresponding safeguards; internal reviews that looked into ongoing practices to identify areas for correctives; independent evaluation that assessed the overall progress of the project for extension; and, lastly, best-practice workshops aimed at sensitizing teachers to quality concerns.

This diversity notwithstanding, the study concludes that convergence in approaches and methods could be achieved if institutions consider and work for quality assurance along three dimensions—core, systemic and resource related.
The core dimension pertains to those factors that constitute the foundation of quality assurance in ODL, whatever the context and whichever the generation of ODL in question. The said factors are course materials and instructional design, the teaching-learning transaction (including learner evaluation practices), learner support services and systemic research.
The systemic dimension pertains to those factors that constitute the system of ODL at the institutional as well as the national level. They are:

i) The role of the State in introducing, promoting and sustaining quality assurance regimes in ODL.

ii) Institutional leadership that motivates and fosters institutional commitment, reflected in institutional objectives and practices that promote and ensure quality products and processes.

iii) Innovative management that is flexible, pragmatic, democratic, hard on sick components and innovation friendly.

iv) Meticulous long- as well as short-term planning and execution of plans.

v) Quality assurance mechanisms (in the form of quality assurance units or other central units that take care of ODL quality assurance affairs) that are pro-actively involved in institutional affairs.


The resource dimension pertains to factors like technology, technical and academic expertise, learning resources, physical infrastructure including ICT applications and cross-institutional collaboration.

9. Conclusion

The current practice of quality assurance in ODL presents two distinct strands, the first of which is represented by initiatives taken by individual institutions and the second by those that relate, in one way or the other, to national/regional/international quality assurance bodies. The former have restricted objectives and operate within limited resources (intellectual, financial and technological), while the latter, very often not resource-starved, have broader objectives and ambitions. These strands are likely to co-exist for quite some time to come. As 5th generation ODL13 integrates pedagogy with technology and turns education into a global enterprise, convergence of the two strands, already discernible in various international initiatives, is certain to emerge, though slowly.

Secondly, it is evident that governments have a significant role to play in materializing QA in higher education—there has to be appropriate state legislation prescribing a quality framework that higher education institutions must follow. As funding gets linked to performance in this way, institutional accountability will become the norm, leading to a culture of quality.
Thirdly, the strides being made in ICT applications in education, especially in ODL systems, are redefining teaching-learning transactions. Consequently, there are two issues to be addressed: i) quality assurance in relation to ICT-enhanced educational transactions and ii) the compelling need for institutional re-engineering to transform conventional institutional settings into ICT-related settings. The former issue has received some attention14 here and there, but it has to be addressed in a way that provides solutions of (near) universal application—quality criteria and QA strategies have to be identified that work for most situations/institutions. For the latter issue, the international community has to get together to find ways of transforming conventional institutions in order to save them (especially those in the developing countries) from receding into an educational limbo, or downright extinction, relative to the advances being made by institutions in the developed countries.

In this scenario, revisiting planning and management in ODL systems is not only desirable but also crucial. For QA, the management has not only to prompt and procure state legislation, but also to develop an institutional philosophy, together with a matching operational framework, to effect internal assessment that dovetails with the state legislation and external assessment. Secondly, the financial systems need to be overhauled with a view to accommodating contemporary technology (equipment and other related expenditure), quality assurance mechanisms and operations, regular staff development programmes and other concomitant expenses. Thirdly, planning will need to be cognizant of the change that is envisaged and taking place—planning has to be dynamic, proactive and change/re-engineering oriented, not geared to age-old routines and the status quo. In the meantime, institutions need to learn from each other and improve on, or even overhaul, what they are currently doing to achieve quality for the benefit of the ultimate beneficiary—the learner.


References


1 Adapted from Koul, B. N. (2005). Higher Distance/Virtual Education in the Anglophone Caribbean, pp. 72-74. Caracas: International Institute for Higher Education in Latin America and the Caribbean (IESALC-UNESCO).


2 Butcher, B. and Hope, A. (2006). “Embracing Change: Quality Assurance at the Open University of Hong Kong” in B. N. Koul and A. Kanwar (Eds.) (2006). Towards a Culture of Quality, Vancouver: The Commonwealth of Learning.


3 NIST (2005). Education Criteria for Performance Excellence, Gaithersburg: NIST.

4 Van den Berghe, W. (1998). “Application of ISO 9000 Standards to education and training”, Vocational Training European Journal, No. 15, Sept.-Dec.

5 Cullen, J., Joyce, J., Hassal, T. and Broadbent, M. (2003). “Quality in higher education: from monitoring to management”, Quality Assurance in Education, 11 (1), 5-14.

6 Barnett, R. (1992). Improving Higher Education: Total Quality Care, Buckingham: SRHE & OU.

7 AUQA (2005). AUQA audit manual version 2.1, Melbourne: AUQA.

8 QAAHE (2003). A brief guide to quality assurance in UK higher education, Gloucester: QAAHE.

9 Stella, A. (2005). “Cooperation in quality assurance: Developments in Asia and the Pacific”. [See http://www.wes.org/ewenr/o0oct/practical.htm]


10 ENQA (2005). Standards and Guidelines for Quality Assurance in European Higher Education Area, Helsinki: ENQA.

11 See

12 Koul, B. N. and Kanwar, A. (Eds.) (2006). Towards a Culture of Quality, Vancouver: The Commonwealth of Learning.


13 Koul, B. N. (2006). “Prologue--Towards a Culture of Quality in Open Distance Learning: Current Practices” in B. N. Koul and A. Kanwar (Eds.) (2006). Towards a Culture of Quality, Vancouver: The Commonwealth of Learning.

14 Leading the Learning Revolution: The e-Learning Policy of the Open University (2005), Internal Document, Milton Keynes: The Open University.





