An introduction to Qualitative Research Artificial Intelligence Technologies

By Huayi Huang

Cite as:

Huang, Huayi (16 March 2021). An introduction to Qualitative Research Artificial Intelligence Technologies. https://drkriukow.com/an-introduction-to-qualitative-research-artificial-intelligence-technologies/ Drkriukow.com

Introduction

To understand the relationship between Qualitative Research and Artificial Intelligence (AI) technologies, it helps to start with some key ideas in the development of AI within modern computer science. We then go a little into AI's relationship with current views on software-supported data analysis within qualitative research.

Within computer science knowledge communities, AI has historically been subdivided into STRONG and WEAK variants (‘narrow’ AI in current terms). Both sets of ideas still exert considerable influence on current thinking, for example in the general thrust of ongoing AI research programmes.

To ‘strong AI’ researchers, the performance of ‘normal’ or ‘expert’ humans – particularly in their thinking and response to the natural world – is assumed to be the gold standard, in seeking to achieve the ideal of human-level AI through their research programmes. The general aim for strong AI is to remould ‘mechanised’ intelligence in the image of human beings, and to see the natural intelligence we exhibit in the day to day as a source of insight for developing more human-like traits, features, skills, etc. in our technologies and machines. As summarised nicely by Forbes (2019), Artificial General Intelligence (AGI) in essence searches for machines that can successfully:

(1) generalise knowledge from one domain to another, taking knowledge originating from one area and applying it elsewhere,

(2) make plans for the future based on existing knowledge and experiences, and

(3) adapt to the environment as changes occur. 

All of this is based on our very human capabilities of reasoning, puzzle-solving, planning, and coming to a personal and collective common sense about the world. (The sketch below illustrates the nearest current analogue to point (1).)
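To ground point (1) a little: the nearest current analogue in machine learning is arguably ‘transfer learning’, where knowledge learned in one domain (general image recognition, say) is reused for a new, smaller task. A minimal, purely illustrative sketch follows, assuming PyTorch and torchvision are installed; NUM_NEW_CLASSES is a hypothetical placeholder:

```python
# A minimal, illustrative sketch of transfer learning: reusing knowledge
# learned in one domain (general image recognition) for a new task.
# Assumes PyTorch and torchvision are installed; NUM_NEW_CLASSES is a
# hypothetical placeholder for a new target domain.
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5  # hypothetical: five categories in the new domain

# Start from a network pretrained on a large, generic image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Reuse the learned general-purpose visual features; only a new
# task-specific final layer is retrained for the target domain.
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)
```

Even here, the ‘generalisation’ is far narrower than the human capability the definition describes: what is reused is a fixed set of visual features, not a flexible, worldly understanding.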

Notably, for researchers working on ‘weak AI’ technologies, the general strategy was instead to start with what exists in the current machine. The work of narrow AI, then, was to argue for the advance and expansion of capabilities existing in the machine; at a societal level arguably valuing technological over human development in various (narrow) domains of performance.[1] In this light, cited successes of ‘narrow AI’ – such as image and speech recognition, AI-powered chatbots, self-driving cars, and Optical Character Recognition – do not so much progress artificial agents and intelligence(s) along a one-dimensional ‘ladder’ or ‘spectrum’, but instead function to replace multi-dimensional facets of the natural intelligence we’ve inherited in our evolution as human beings (e.g. recognising faces and the content of spoken words, engaging in conversation, etc.).
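To make the ‘narrow’ character of such successes concrete, here is a minimal sketch of one of them, Optical Character Recognition, in Python. It assumes the open-source Tesseract engine and its pytesseract wrapper are installed; ‘scanned_page.png’ is a hypothetical input file:

```python
# A minimal sketch of a 'narrow AI' success: Optical Character Recognition.
# Assumes Tesseract plus the pytesseract wrapper are installed
# (pip install pytesseract pillow); 'scanned_page.png' is hypothetical.
from PIL import Image
import pytesseract

# The system does one narrow thing well: mapping pixel patterns to
# characters. It replaces one facet of natural intelligence (reading
# printed text) without touching comprehension, conversation, or judgement.
text = pytesseract.image_to_string(Image.open("scanned_page.png"))
print(text)
```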

Whilst not explicit in the stated aims of narrow/weak AI research programmes, the increasing integration of mechanical traits, features, skills, and capabilities into the way we reason about the people and things around us sometimes attracts interesting side effects: the full potential of the human being (for self-actualisation, for example) may be remoulded in the image (and limitations) of the artificial intelligence technology. This can be seen in the common resignation to, and experience of, needing to perform things a particular way (often a little awkwardly!) to accomplish some broader purpose in collaboration with a technological tool or system.

To qualitative researchers, it is uncontroversial that definitions (of visions, ideas, projects, situations, etc.) are important to explore and debate, since they often help or hinder us in accomplishing meaningful work in our personal or professional lives. As Maxwell highlights, for example, a fundamental insight from symbolic interactionism is that people act in accordance with their definitions of situations, so fruitful explanations[2] depend on understanding these definitions.

In seeking to understand the relationship between Qualitative Research and AI technologies, then, let us start by revisiting some key definitions from the history of computer science, and update these definitions of AI for the present. To help us in this, I’ve added a small commentary next to each set of quotes, to elaborate on their meaning for us as researchers in current times:


“From what I see, we do not yet live in a world where machines can simulate every aspect of human learning or intelligence, but even if we could enable such an ‘AI singularity’, what of our place as human beings in such a world?”

In search of ideas and technologies for strong AI

  1. “The study [of AI] is to proceed on the basis of the conjecture that every aspect of learning or any other feature of [human] intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy, Minsky, Rochester, & Shannon, 1955)

Commentary: What a grand definition! In this we see the idea of remoulding ‘mechanised’ intelligence in the image of human beings, in the search for machines simulating human intelligence. McCarthy et al.’s ‘conjecture’ here highlights what is in reality a body of proposals being intensely elaborated in the current decade: about a general relationship (of simulation-ability) between aspects of human learning or intelligence, and the capabilities of our mechanical counterparts. We can of course choose to accept or reject these propositions as an accurate description of the relationships we actually see in our own experiences with learning, natural intelligence, and artificial simulations of this intelligence. From what I see, we do not yet live in a world where machines can simulate every aspect of human learning or intelligence, but even if we could enable such an ‘AI singularity’, what of our place as human beings in such a world?

  2. “The study of computations that make it possible to perceive, reason, and act” (Winston, 1992).

“The exciting new effort to make computers think…machines with minds, in the full and literal sense” (Haugeland, 1985).

Commentary: These definitions bring to mind the idea that, as children, we human beings learn to perceive, reason, and act in an uncertain and changing world, and get a little better at doing this as we grow in age and worldly experience. Regardless of our self-image as ‘normal’ or ‘expert’ humans in performing our current family or social roles, artificial technologies arguably still have a long way to go in trying to generalise across the breadth of role-adaptations and expertise available within our species. I guess the quest would be to produce a general-purpose perceiver and reasoner, able to muddle through an uncertain and changing world in the way we can?

  3. “Artificial intelligence, broadly (and somewhat circularly) defined, is concerned with intelligent behavior in artifacts. [Natural] intelligent behavior, in turn, involves perception, reasoning, learning, communicating, and acting in complex environments” (Nilsson, 1998).


“[AI is] The field of research concerned with making machines do things that people consider [previously] to require intelligence” (Minsky, 1988).


“The study of how to make computers do things at which, at the moment, people are better” (Rich, Knight, & Nair, 2009).

Commentary: These definitions showcase the ‘moving target’ of what really counts as ‘intelligent’ in developing an artificial intelligence technology (in a way familiar to us, as qualitative researchers, from working with evolving ideas from the field). Such ‘intelligence’ is likely to remain dynamic and adaptable, since the human capabilities referred to (perception, reasoning, learning, etc.) are all adaptive, open-ended processes, reflecting our efforts to cope and thrive in our changing world. For skeptics of the critical role that evolving, loosely consensual ideas and constructs play in making sense of the world, I draw our attention to the evolving key ideas around ‘safety’ and ‘data’ (Huang et al. 2020), both of which have long histories of enabling meaningful professional and personal work to be accomplished, despite their evolutionary and unstable nature.

“The general aim for strong AI is to remould ‘mechanised’ intelligence in the image of human beings, and to see the natural intelligence we exhibit in the day to day as a source of insight for developing more human-like traits, features, skills, etc. in our technologies and machines”

The state of narrow/weak AI technologies

In moving from the aspirations of strong AI to the empirical world we see, the following definitions seem descriptive of existing successes in our search to replace and mechanise parts of our natural intelligence.

  4. “The branch of computer science that is concerned with the automation of intelligent behavior” (Luger & Stubblefield, 1992).


“Artificial intelligence is the design and study of computer programs that behave intelligently” (Dean, Allen, & Aloimonos, 1995).

“[AI is] a field of study that encompasses computational techniques for performing tasks that apparently require intelligence when performed by humans” (Tanimoto, 1990).

Commentary: In these definitions, computer science theories and technologies are harnessed to reduce meaningful human input, participation, and judgement in ‘intelligent behaviours’ and ‘intelligent processes’, in light of the ‘moving target’ of what really counts as ‘intelligent behaviour’ (see Definition 3 above). In essence, they foreground the role of the computational and mechanical, and limit human idiosyncrasy and creativity to those forms which ‘work’ in harmony with the current technology.

  5. “[AI is] the study of mental faculties through the use of computational models” (Charniak & McDermott, 1985).

Commentary: This definition is most illustrative of the idea of trying to remould the full potential of the human being into the image (and limitations) of an artificial intelligence technology. If the starting point is to understand ourselves through what is possible in the current technology (rather than in the human being), a risk arises of replacing empathic forms of understanding of fellow members of our species ‘at source’: if, for example, we were to educate our next generation, share learning with others, and so on, only through a mechanical and computational understanding of the conscious experiences of ourselves and others.


So as actors, then, in the collective search for technologies to enhance our mechanised or natural intelligence, perhaps it is important for us to engage in “the study of ideas that enable computers to be intelligent” (Winston, 1984)? This may lead to questions like:

  • What sort of ideas enable us as humans, to be ‘intelligent’?
  • What ways of ideation do we currently ‘do’ as intelligent learners?

The answers to these could become relevant to both STRONG and NARROW AI research programmes, depending on how this study of enabling ideas for mechanised or natural intelligence(s) pans out…


Thinking about ideas rather than technologies then takes us from the material realm into more abstract realms of theory and evidence-driven inference (or of deductive, inference-driven evidence generation, as in the application of mathematical variables to document data from empirical domains). From a mixed-methods research perspective, the ideas we infer could include the full range of meanings and implications supported by combining qualitative and quantitative evidence, created using both the natural and artificial languages from our history.
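As a small, hypothetical illustration of combining qualitative and quantitative evidence, the sketch below tabulates human-assigned qualitative codes into quantitative frequency counts, using Python’s pandas library. The participants, codes, and excerpt structure are all invented for illustration:

```python
# A minimal, hypothetical sketch of mixing qualitative and quantitative
# evidence: human-assigned codes (qualitative) become frequency counts
# (quantitative). All data here is invented for illustration.
import pandas as pd

coded_excerpts = pd.DataFrame({
    "participant": ["P1", "P1", "P2", "P3", "P3", "P3"],
    "code": ["trust in tools", "time pressure", "time pressure",
             "trust in tools", "trust in tools", "role identity"],
})

# Cross-tabulate: rows are participants, columns are qualitative codes,
# cells count how often each code was applied to each participant's data.
code_counts = pd.crosstab(coded_excerpts["participant"], coded_excerpts["code"])
print(code_counts)
```

The qualitative work (deciding what a code means, and when it applies) remains human; only the counting is mechanised.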


“the remedy to the pressures facing us as qualitative researchers in this age of technology lies perhaps not in avoiding, but fully engaging to actively shape the course of technological developments for ourselves”

So, 3 working metaphors for Qualitative Research AI Technologies (QRAIT) then?

In considering 3 metaphors reflective of current views on qualitative data analysis software, we might pay particular attention to the following in thinking about emerging QRAITs:

  • The silver bullet of QRAITs. Aspirations to mechanise parts of our human intelligence may hold promise, for example for reducing the amount of time required for qualitative analysis (to inform ever-shortening cycles of shifting policy and practice priorities). But we should not forget the need for such technologies to challenge our pre-expectations and conceptualisations, as data analysts and within our teams. What is unlikely to go away in the short term is the need to root our working framing of phenomena in (human) synthesis of trustworthy sources, for developing those pre-expectations and conceptualisations (e.g. from existing theory). This remains true despite the pragmatic case for reducing some of our meaningful human input, participation, and judgement in the qualitative data analysis process, in exploring the benefits of this ‘silver bullet’ of mechanised artificial inference (a minimal sketch of such mechanised inference appears below, after this list).
  • The snake oil salesman seeking profit (from QRAITs) rather than remedy. Colleagues sometimes worry about the tendency of computer software to promote increased distance between data analysts and their data, an increased quantification and homogenisation of method, and postures of decontextualised data extraction focusing on mathematical structures (instead of contextualised data, and qualitative theory-building that unlocks a fruitful system of meaning for your research subject). But other colleagues do not see these as insurmountable risks in developing mechanised support for enhancing the natural intelligence and judgement of qualitative researchers. In other words, the remedy to the pressures facing us as qualitative researchers in this age of technology lies perhaps not in avoiding, but fully engaging to actively shape the course of technological developments for ourselves, in pursuit of our shared interest in deeply understanding the meaningful and formative experiences of human action in natural settings (Schwandt 2007, Huang 2020).
  • Putting the cart (of technology) with the horse (of qualitative analysts and methodology). In recognising the mutual influence that qualitative research technologies, analysts, and methods have on each other as they interact in the research and learning process of a project, we may come to accept that the form and content of our qualitative work will sometimes be highly influenced by the strengths and limitations of the QRAITs we work with on a research problem. On the other hand, in light of broader interests in further mechanising parts of the multi-dimensional natural intelligence we each inherit, can we come to depend on our QRAITs not only for the more clerical tasks, but also come to appreciate them as co-researchers, capable of operating at equally abstract or more meaningful units of thought?

This last point is surely an urgent question for us to engage with and explore, in nurturing a human spirit encouraging of meaningful forms of human interaction and experience in natural settings.
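To ground that question a little, here is a minimal, hedged sketch of the kind of mechanised inference the first metaphor above gestures at: an unsupervised topic model proposing candidate groupings over interview snippets, which the human analyst must still interpret, challenge, and recontextualise. It assumes the scikit-learn library is installed, and the snippets are invented for illustration:

```python
# A minimal sketch of machine-assisted qualitative coding: an unsupervised
# topic model (TF-IDF + non-negative matrix factorisation) proposing
# candidate groupings of interview snippets. Assumes scikit-learn is
# installed; the snippets are invented. The output is a *prompt* for human
# interpretation, not a finished analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

snippets = [
    "I never have enough time to reflect on the interviews properly",
    "The software keeps me at a distance from what people actually said",
    "Deadlines push us to report findings before the analysis feels done",
    "I trust my own reading of a transcript more than any automatic code",
    "Policy timelines keep shrinking, so the analysis gets squeezed",
    "Clicking through the tool feels mechanical compared with close reading",
]

# Represent each snippet by weighted word frequencies.
tfidf = TfidfVectorizer(stop_words="english")
doc_term = tfidf.fit_transform(snippets)

# Ask for two candidate 'topics'; the analyst decides whether they are
# meaningful codes or artefacts of the mathematics.
model = NMF(n_components=2, random_state=0)
model.fit(doc_term)

terms = tfidf.get_feature_names_out()
for i, weights in enumerate(model.components_):
    top_terms = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"Candidate topic {i}: {', '.join(top_terms)}")
```

The design choice matters: the model proposes candidate groupings, but the analyst decides whether they amount to meaningful codes, keeping human judgement in the loop rather than replacing it.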

References

Charniak, E., & McDermott, D. (1985). Introduction to artificial intelligence. Reading, MA: Addison-Wesley.

Dean, T., Allen, J., & Aloimonos, Y. (1995). Artificial intelligence: Theory and practice. Redwood City, CA: Benjamin Cummings.

Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.

Huang, H., Jefferson, E., Gotink, M., Sinclair, C., Mercer, S. W., & Guthrie, B. (Revision 1 under review). Collaborative improvement in Scottish GP clusters after the Quality and Outcomes Framework: A qualitative study. British Journal of General Practice.

Huang, H. (2020). Lecture on Research Skills in Health Sciences: Understanding health services. November 2020, University of Edinburgh.

Luger, G. F., & Stubblefield, W. A. (1992). Artificial intelligence: Structures and strategies for complex problem solving (2nd ed.). Redwood City, CA: Benjamin Cummings.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Retrieved from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html [Online; accessed 7 August 2014]

Minsky, M. (1988). The society of mind. New York, NY: Simon & Schuster, Inc.

Nilsson, N. J. (1998). Artificial intelligence: A new synthesis. San Francisco, CA: Morgan Kaufmann.

Rich, E., & Knight, K. (1991). Artificial intelligence (2nd ed.). New York, NY: McGraw-Hill.

Rich, E., Knight, K., & Nair, S. B. (2009). Artificial intelligence (3rd ed.). New Delhi: Tata McGraw-Hill.

Schwandt, T. A. (2007). The SAGE dictionary of qualitative inquiry (Vols. 1-0). Thousand Oaks, CA: SAGE Publications.

Tanimoto, S. L. (1990). The elements of artificial intelligence: Using Common Lisp. New York, NY: Computer Science Press.

Winston, P. H. (1984). Artificial intelligence (2nd ed.). Reading, MA: Addison-Wesley.

Winston, P. H. (1992). Artificial intelligence (3rd ed.). Reading, MA: Addison-Wesley.