I have been involved in a lot of interest and discussion lately about
1. How can I frame good research questions in Educational Technology?
and then
2. How can I match them with an appropriate methodology?
There are many sources you could use to answer these questions (and I am also interested in what other people know in response to them, and in the best resources you can suggest to others).
As a starting point for the discussion, here are two short articles which helped clarify some things in my mind as I was preparing to conduct my own dissertation research.
2. A Model of Technology Capable of Generating Research Questions
For any who are interested in participating in an online discussion about these questions, please post your questions in a comment response to this blog post. Also please post any insights or questions in response to these two articles.
I have also invited the author of these two articles (Dr. Andy Gibbons – very well known in IDT) to be available to help read and respond to some of the questions/comments that you post, according to his availability. Potentially we will also have a chance to do a live online meeting with him at some point within the next couple months (I will post more as I know it).
This discussion is intended to help any who are in the process of deciding what and how to do their research. I have a feeling that we can all learn a lot from this discussion.
I’d like to be part of this discussion, but there seem to be underlying assumptions that I’m missing out on. I admit that I’ve only skimmed the articles (I need to print them out to read them properly). However, with my own background in Computer Science, I am accustomed to research questions such as:
Can a certain thing be accomplished?
Does a certain approach accomplish it better (by some stated metric)? Under what conditions?
What relationships exist between a specific set of variables and a set of outcomes (e.g., if you change a certain parameter, is the approach faster at the cost of more computer memory)?
There is also the occasional philosophical question: “What if a certain thing is true?” but that’s not as common.
There are a few common ways these questions are answered too:
Existential proof — we did it, and therefore, it can be done; here’s how. Such a paper usually describes the algorithm that accomplishes a specific task and gives some additional properties.
Empirical proof — many problems in computer science have “benchmark problems” that are used to test incremental improvements. The researcher throws his or her latest approach at the benchmark and compares it with previous approaches for statistical differences.
Theoretical proof — the researcher presents a lot of math with confusing notation. After sufficient symbolic gymnastics, the paper arrives at an insightful conclusion.
Predictive model — the researcher takes empirical data and attempts to explain why a complex relationship exists by proposing a model. The model predicts other relationships that can also be tested to validate the model. This is hard. Models can be tough to come up with, and regardless of how often they predict correctly, all we can say is that they still look good.
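To make the "empirical proof" pattern concrete, here is a minimal sketch in Python. The benchmark timings are invented for illustration, and the paired t-test is just one of several analyses a researcher might run on benchmark results:

```python
# Sketch of an "empirical proof": run a new approach and a baseline
# on the same set of benchmark problems, then test whether the
# per-problem differences are statistically significant.
# All numbers below are invented for illustration.
import math

baseline = [12.1, 15.3, 9.8, 20.4, 11.7, 14.2, 13.5, 18.9]  # e.g. runtimes (s)
new      = [11.4, 14.1, 9.9, 18.0, 10.8, 13.0, 13.1, 17.2]

# Paired t-test on the per-problem differences.
diffs = [b - n for b, n in zip(baseline, new)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean / math.sqrt(var / n)

print(f"mean improvement = {mean:.2f}, t = {t:.2f} with {n - 1} df")
# Compare |t| against the critical value for the chosen alpha
# (about 2.365 for alpha = 0.05, two-sided, 7 degrees of freedom).
```

In practice one would use a library routine (and check the test's assumptions, such as approximate normality of the differences), but the logic is the same: a claimed improvement is backed by a comparison on shared benchmarks at a stated significance level.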
Naturally, my little impromptu lists aren’t exhaustive, but I think they give the general sense of how we do things. If someone shows that an algorithm accomplishes a given task, it’s tough to argue with that. If one approach is statistically better (in some way) than another approach, then, as long as the mathematical assumptions underlying the analysis hold, I accept the result at the stated significance level (alpha bounds the false-positive rate) that the approach really is better. I take these things for granted.
So… what blinders am I wearing? How are things different outside of my CS bubble? One could say that I take a positivist perspective of things, but constructivism seems crazy in my domain. I definitely need more information so that I can speak the same language as you. When is methodology called into question?
I’ve been trying to take a work-free vacation, but this topic is so dear to me that I just can’t hold back! Anyway, you can find my thoughts on the subject in two chapters I have recently written on the topic. The link is below.
http://cs.joensuu.fi/~jrandolp/articles/methods_choice.pdf
Also, for students in my Scholarly Communication class, there is an article by Creswell, “Research questions, objectives, and hypotheses,” in Readings on Writing that might also help you clarify these issues.
I’m of the opinion that the research question is at the crux of methods choice, but there are also methodological factors that come into play before and after the specification of the research question. I try to explain it in my chapter.
Thanks for starting such a great discussion topic.
Justus
Justus,
I am very impressed after skimming your chapters, and I intend to read them carefully. I am especially impressed by your focus on the nature of the question as the determiner of the research methodology because it leads the researcher to consider methods outside the standard F-test mold.
I am taking into account Joseph’s comment also, as a computer scientist, when he remarked that some questions are of the nature “Can we do this?”, or “How can we do this?” Such questions seem to me to be asking something fundamentally different from the type of questions we think of as being “scientific”. I classify these questions along the lines of Herbert Simon’s suggestion (in Sciences of the Artificial) that there is a technological, or design, mode of thinking that produces a whole different set of research questions. To my thinking, these are very much like the questions Joseph identifies that are associated with computer science, which is heavily oriented toward design.
I would appreciate your response to a proposition that Vic Bunderson and I make in the “Explore, Explain, Design” article: that there are research questions that are not, strictly speaking, an outgrowth of a scientific mode of thought. That is, they are not questions looking for a single natural explanation. Instead, some questions seek to identify and verify generative principles that can be used to generate alternative designs and support the making of designs.
I like very much, for instance, the type of question you recommend that seeks “a need for information about what factors moderate the effectiveness of certain kinds of technological interventions”. Sometimes our research demonstrates that a relationship exists without eliciting data that would aid a designer in designing interventions and intervention patterns. This seems to me to be a much-needed category of research that is too often neglected.
Does this view seem useful to you?
Andy
Dear Andy, Thank you for your interesting response. I plan to read through your articles, get my thoughts together, and then make a response in the near future.
Far from being ‘philosophical’, in the sense of ‘metaphysical’ or ‘speculative’, the question “What if a certain thing is true?” is one of the cornerstones of science. (Though not so much of CS.)
When you have a new hypothesis, e.g. “matter is made of atoms”, you go on designing experiments which will eventually support or undermine your hypothesis. You do that by asking “What if matter were made of atoms?”. That is, you ask, “what are the logical consequences of the atomic theory?” and go on testing those logical consequences.
Hello Everyone,
Sorry for the slow response.
About “How can . . .“ questions:
I think that some of the awkwardness of educational technology research questions stems from the fact that our field is a multidisciplinary discipline whose subdisciplines also have subdisciplines. For example, computer science is one of the many disciplines that informs our field. Computer science itself has its own subdisciplines and traditions: an engineering tradition, a mathematical tradition, and an empirical tradition. (Matti’s thesis at ftp://cs.joensuu.fi/pub/Dissertations/tedre.pdf has a great section on this.) The result is that sometimes we get awkward artifacts from combining these incommensurable traditions. For example, the “How can . . .” question is awkward because I think that it is really an engineering proposition (e.g., I’m going to create a thing that meets a certain set of specifications) couched in the hypothesis-testing tradition of empirical research. (Also, you can’t actually answer a “How can . . .” question because you would, technically, have to list an infinite number of ways that the task can be accomplished.) Often, we end up forcing an engineering proposition into a hypothesis-testing question, ending up with a hybrid that doesn’t work in either tradition.
Taking a Wittgensteinian tack here, I think that the way to get out of this situation is to take a step back and to try to untangle and demystify the language and traditions of our field. Do we give more value to a hypothesis-testing question than an engineering task? If we do, why do we, and should we? My guess is that the best way for our field to advance is to take a step backward and untangle the knot that we have created. The awkwardness goes away when we separate the act of engineering from the act of hypothesis-testing. I think that perhaps it is best to treat them as complementary acts instead of combining them into one act. For example, I don’t think that we should consider design-based research to be doing design and research at the same time and in the same sense. That is, they don’t share identities. Also, I think that it would be wise to only use the term “research question” when we mean to invoke the hypothesis-testing paradigm.
Now on to whether
“there are research questions that are not, strictly speaking, an outgrowth of a scientific mode of thought. That is, they are not questions looking for a single natural explanation. Instead, some questions seek to identify and verify generative principles that can be used to generate alternative designs and support the making of designs.”
I certainly think that we, instructional designers and evaluators, do these kinds of acts that you mentioned above. It’s probably at the core of what we do. However, I think that the intellectual quandary here might be created by calling these things “research questions” and invoking the hypothesis-testing tradition and combining that act with these other acts that are not hypothesis-testing. Untangling this quote above, there is an element of hypothesis-testing (verifying principles), exploring (identifying and generating), and designing (creating). So, in summary, I think that, no, there are not research questions that are not an outgrowth of a scientific mode of thought because by using the term “research question” we automatically invoke an outgrowth of a scientific mode of thought. But, I agree that there are many modes of scientific inquiry or action that we alternately engage in, such as looking for a single natural explanation, identifying principles, verifying principles, generating alternative designs, and supporting the making of designs. In essence, we Explore, Explain, Design (and describe, and experiment, and correlate, and . . . ).
Ok, I’ll leave this dialogue with a closing comment from one of my favorite scientists, Gene Glass, the father of meta-analysis:
“. . . We need to stop thinking of ourselves as scientists testing grand theories, and face the fact that we are technicians collecting and collating information, often in quantitative forms. Paul Meehl (1967; 1978) dispelled once and for all the misconception that we in, what he called, the “soft social sciences” are testing theories in any way even remotely resembling how theory focuses and advances research in the hard sciences. Indeed, the mistaken notion that we are theory driven has, in Meehl’s opinion, led us into a worthless pro forma ritual of testing and rejecting statistical hypotheses that are a priori known to be 99% false before they are tested.”
(see this quote in context at: http://glass.ed.asu.edu/gene/papers/meta25.html)
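Meehl’s complaint can be made concrete with a little arithmetic. The sketch below (the effect size and sample sizes are invented for illustration) uses the fact that for a one-sample t-test the statistic grows roughly as d * sqrt(n), so any nonzero effect, however trivial, eventually crosses the significance threshold:

```python
# Meehl's point, sketched: if the null hypothesis of exactly zero
# effect is essentially always (at least slightly) false, then a
# large enough sample will reject it regardless of practical
# importance. For a one-sample t-test, t is approximately
# d * sqrt(n), where d is the standardized effect size.
import math

d = 0.02          # a trivially small standardized effect (invented)
t_crit = 1.96     # approximate two-sided critical value at alpha = 0.05

for n in [100, 10_000, 1_000_000]:
    t = d * math.sqrt(n)
    verdict = "reject H0" if t > t_crit else "fail to reject"
    print(f"n = {n:>9,}: t ~ {t:5.2f} -> {verdict}")
# With n large enough, even d = 0.02 becomes "statistically
# significant", which is exactly Glass and Meehl's complaint.
```

Under these invented numbers the tiny effect is undetectable at n = 100 but comfortably "significant" at n = 1,000,000, which is why rejecting a point null, by itself, says so little.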
Is Gene Glass right?
Justus