All educational research and assessment are based on inference from evidence. Evidence is constructed from learner data. The quality of this construction is, therefore, fundamentally important. Many educational measurements have relied on eliciting, analyzing, and interpreting students' constructed responses to assessment questions. New types of data may engender new opportunities for improving the validity and reliability of educational measurements. In this series of articles, I will show how
graph theory can be applied to educational research.
The process of inquiry-based learning with an interactive computer model can be imagined as a trajectory of exploration in the problem space spanned by the user interface of the model. Students use various widgets to control different variables, observe the corresponding emergent behaviors, collect data, and then reason with the data to draw a conclusion. This sounds obvious. But exactly how do we capture, visualize, and analyze this process?
From the point of view of computational science, the learning space is enormous: If we have 10 controls in the user interface and each control has five possible inputs, there are 5^10, or nearly 10 million, different ways of interacting with the model, assuming that the user interacts with each control once and only once. To be able to tackle a problem of this magnitude, we can use some mathematics.
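As a quick check on the combinatorics, here is a minimal Python sketch; the control and input counts are just the hypothetical numbers above.

```python
# Hypothetical interface: 10 controls, each accepting one of 5 inputs,
# each control used exactly once -> 5 ** 10 combinations of settings.
n_controls = 10
n_inputs = 5
print(n_inputs ** n_controls)  # 9765625
```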
Graph theory is one of the tools we are building into our process analytics. The publication of Leonhard Euler's Seven Bridges of Königsberg in 1736 is commonly considered the birth of graph theory.
Figure 1: A learning graph made of two subgraphs representing two ideas.
In graph theory, a graph is a collection of vertices connected by edges: G = (V, E).
When applied to learning, a vertex represents an indicator that may be related to a certain competency of a student and that can be logged by software. An edge represents the transition from one indicator to another. We call a graph that represents a learning process a learning graph.
A learning graph is always a digraph G = (V, A), that is, it always has directed edges or arrows, because of the temporal nature of learning. Most likely, it is a multigraph that has multiple directed edges between one or more pairs of vertices (sometimes called a multidigraph), because the student often needs multiple transitions between indicators to learn their connections. A learning graph often has loops, edges that connect a vertex back to itself, because the student may perform multiple actions related to an indicator consecutively before making a transition. Figure 1 shows a learning graph that includes two sets of indicators, each for an idea.
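To make these definitions concrete, here is a minimal sketch, assuming Python with the networkx library; the indicator labels and the event log below are invented for illustration and are not real student data.

```python
import networkx as nx

# Hypothetical log: the indicator associated with each student action, in order.
events = ["A1", "A1", "A2", "B1", "A2", "B1", "B2", "B2", "A1"]

# A learning graph is a multidigraph: directed edges, parallel edges, and loops allowed.
G = nx.MultiDiGraph()
for source, target in zip(events, events[1:]):
    G.add_edge(source, target)  # one arrow per observed transition

print(G.number_of_nodes(), G.number_of_edges())  # vertices and arrows
```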
Figure 2. The adjacency matrix of the graph in Figure 1.
The size of a learning graph is defined as the number of its arrows, denoted by |A(G)|. The size approximately represents the number of actions the student takes during learning. The multiplicity of an arrow is the number of parallel arrows sharing the same pair of vertices. The multiplicity of a graph is the maximum multiplicity over its arrows. The multiplicity represents the most frequent transition between two indicators in a learning process. The degree dG(v) of a vertex v in a graph G is the number of edges incident to v, with loops counted twice. A vertex of degree 0 is an isolated vertex. A vertex of degree 1 is a leaf. The degree of a vertex represents the number of times the action related to the corresponding indicator is performed.
The maximum degree Δ(G) of a graph G is the largest degree over all vertices; the minimum degree δ(G), the smallest.
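Continuing the same hypothetical sketch, these quantities could be computed from the toy graph as follows.

```python
from collections import Counter
import networkx as nx

events = ["A1", "A1", "A2", "B1", "A2", "B1", "B2", "B2", "A1"]  # made-up log
G = nx.MultiDiGraph()
G.add_edges_from(zip(events, events[1:]))

size = G.number_of_edges()                           # |A(G)|: number of arrows
pair_counts = Counter((u, v) for u, v in G.edges())  # arrows per ordered pair of vertices
multiplicity = max(pair_counts.values())             # multiplicity of the graph
degrees = dict(G.degree())                           # d_G(v): loops counted twice
print(size, multiplicity, max(degrees.values()), min(degrees.values()))
```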
The distance dG(u, v) between two vertices u and v in a graph G is the length of a shortest path between them. When u and v are identical, their distance is 0. When u and v are unreachable from each other, their distance is defined to be infinite (∞). The distance between two indicators may reveal how the related constructs are connected in the learning process.
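A shortest-path query on the same toy graph gives the distance between two indicators; the conventions for identical and unreachable vertices are handled explicitly.

```python
import networkx as nx

events = ["A1", "A1", "A2", "B1", "A2", "B1", "B2", "B2", "A1"]  # made-up log
G = nx.MultiDiGraph()
G.add_edges_from(zip(events, events[1:]))

def distance(G, u, v):
    """Length of a shortest directed path from u to v; 0 if identical, infinity if unreachable."""
    if u == v:
        return 0
    try:
        return nx.shortest_path_length(G, u, v)
    except nx.NetworkXNoPath:
        return float("inf")

print(distance(G, "A1", "B2"))  # 3 for this sample log (A1 -> A2 -> B1 -> B2)
```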
Figure 3. A more crosscutting learning trajectory between two ideas.
Two vertices u and v are called adjacent if an edge exists between them, denoted by u ~ v. The adjacency matrix is a square matrix that records which vertices of a graph are adjacent to which other vertices. Figure 2 is the adjacency matrix of the graph in Figure 1; its trace (the sum of the diagonal elements of the matrix) represents the number of loops in the graph. Knowing the adjacency matrix, we can apply spectral graph theory to study the properties of a graph through the characteristic polynomial, eigenvalues, and eigenvectors of the matrix (because a learning graph is a digraph, its adjacency matrix is generally asymmetric and the eigenvalues are often complex numbers). For example, the eigenvalues of the adjacency matrix may be used to reduce the dimensionality of the dataset for clustering.
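For the same toy graph, the adjacency matrix, its trace, and its eigenvalues can be obtained with numpy; this is only a sketch, not the actual matrix of Figure 2.

```python
import networkx as nx
import numpy as np

events = ["A1", "A1", "A2", "B1", "A2", "B1", "B2", "B2", "A1"]  # made-up log
G = nx.MultiDiGraph()
G.add_edges_from(zip(events, events[1:]))

A = nx.to_numpy_array(G)            # entry (i, j) counts arrows from vertex i to vertex j
loops = np.trace(A)                 # trace = number of loops in the graph
eigenvalues = np.linalg.eigvals(A)  # generally complex, since A is not symmetric
print(A, loops, eigenvalues, sep="\n")
```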
Figure 4. The adjacency matrix of the graph in Figure 3.
How might learning graphs be useful for analyzing student learning? Figure 3 gives an example that shows a different exploration behavior across two ideas (such as heat and temperature, or pressure and temperature). In this hypothetical case, the student makes more transitions between the two subgraphs that represent the two ideas and their indicator domains. This pattern can potentially result in a better understanding of the connections between the ideas. The adjacency matrix shown in Figure 4 has a different block structure from that shown in Figure 2: the off-diagonal blocks A-B and B-A are much sparser in Figure 2 than in Figure 4. The spectra of these two matrices may be quite different and could be used to characterize the knowledge integration process that fosters the linkage between the two ideas.
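As a rough illustration of this comparison, the following sketch contrasts two invented adjacency matrices, one with sparse off-diagonal blocks (few crosscutting transitions) and one with denser ones; the numbers are not the actual matrices of Figures 2 and 4.

```python
import numpy as np

def cross_transitions(A, idea_a, idea_b):
    """Number of arrows crossing between the two idea blocks (A-B plus B-A)."""
    return A[np.ix_(idea_a, idea_b)].sum() + A[np.ix_(idea_b, idea_a)].sum()

# Vertices 0-2 belong to idea A, vertices 3-5 to idea B (hypothetical).
idea_a, idea_b = [0, 1, 2], [3, 4, 5]
siloed = np.array([[1, 2, 1, 0, 0, 0],
                   [1, 0, 2, 0, 0, 0],
                   [0, 1, 1, 1, 0, 0],
                   [0, 0, 0, 0, 2, 1],
                   [0, 0, 0, 1, 1, 1],
                   [0, 0, 0, 2, 0, 1]])
crosscutting = siloed.copy()
crosscutting[2, 3] += 2  # extra A -> B transitions
crosscutting[4, 1] += 2  # extra B -> A transitions

print(cross_transitions(siloed, idea_a, idea_b))        # 1
print(cross_transitions(crosscutting, idea_a, idea_b))  # 5
print(np.sort(np.abs(np.linalg.eigvals(siloed))))       # the spectra differ between the two cases
print(np.sort(np.abs(np.linalg.eigvals(crosscutting))))
```

Counting cross-block arrows is only a crude proxy; the spectra mentioned above carry richer structural information.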
Go to Part II.