Saturday, March 28, 2009

Smart molecules: next generation molecular visualization

A significant part of chemistry education is about teaching molecular structures. Before computers were widely available, many teachers used physical ball-and-stick models in the classroom. Physical models have limitations--the variety of molecules we can build is limited, and the molecules cannot have too many atoms. When computers became powerful enough to support 3D video gaming, chemistry educators realized that they could be used to show any kind of molecule on the screen, with essentially no limit on the structures one might wish to show. This method of computer-aided teaching is now commonly known as molecular visualization and is widely adopted by chemistry teachers for teaching about molecules.

There are now many molecular visualization tools freely available for education, such as Jmol, PyMOL, QuteMol, and Visual Molecular Dynamics, to name a few. All of these tools present wonderful graphics for showing molecules in 3D. When a student uses such a tool to learn a molecule's structure, he or she usually rotates the molecule to see it from different angles, zooms in and out to view different levels of detail, and sometimes turns on different representations of the molecule to identify recognizable patterns (such as a structural motif of a biomolecule, like the famous DNA double helix, or the electrostatic surface of a polar molecule, like water).

It is our hope that, through manipulating and observing these virtual molecules, students will gain substantial knowledge about them, apply that knowledge, and learn to think like a chemist. There are, however, reasonable doubts that this expected learning occurs spontaneously once students are given these tools. We observed in the classroom that a number of students did not accomplish the learning goal even though they were fascinated by the beautiful visualizations of molecules and played with them tirelessly. Most materials do provide instructions and background readings, but they do not seem to be very effective. Without an instructor nearby to explain what they are seeing on the screen, many students may leave the activity with no science learning accomplished.

The problem, in my opinion, partly lies in the fact that most of these tools present only a passive learning experience. By passive I mean that the molecule does not give any feedback to the student while he or she is interacting with it.

In the game world, a well-designed game presents an active experience to the user. While playing, the user constantly receives feedback from the system that holds his attention, and he always faces a challenge he must meet to accomplish his goals.

What can we learn from games? A lot. The first thing is: imagine that the molecule can respond to the student's actions. For example, the student pilots a microscopic spaceship into the molecule with a mission to fight some toxic molecules (such as carbon monoxide), and he has to carefully avoid vicious traps set by strongly polar sites that want to catch his ship. His ship is equipped with a laser gun that can break a chemical bond and destroy an evil molecule. During his journey, he will encounter a number of puzzles and challenges that he must solve to win the game. For instance, he must maneuver his ship through a narrow passage inside a molecule in order to reach an active reaction site.

By adding these functionalities to a molecular visualization tool so that the molecule actively interacts with the user (in addition to just passively rendering a view), we may be able to increase the learning opportunities for students. We call this idea Smart Molecules; it is based on our NSF-funded Molecular Rover Project.

A smart molecule can also be thought of as an interactive tutor built into a visualization tool. For example, depending on where the ship is, the molecule can act like a flight controller and instruct the student where to pilot the ship. It can give hints to the user while navigating. It can provide more ammunition or fuel when the supplies on the ship are running low. Science lessons can be embedded into the environment and called up for help when needed.

Smart Molecules represents a revolutionary step forward for the use of molecular visualization tools in education. It would be interesting to see whether this technology helps students learn molecular structures better in the classroom. Stay tuned.

Thursday, March 19, 2009

Constructive science in the classroom


"Imagination is more important than knowledge." --- Albert Einstein
Science should be taught as a verb, not only as a noun. Doing science is a compelling and effective way to learn. It is through the process of exploration, creation, and invention that theories are applied, ideas are tested, and knowledge is synthesized and extended. This post showcases some interesting simulations recently created by students using the Molecular Workbench software and demonstrates the feasibility of using the constructionist approach to teach science more effectively.

The image on the left is a screenshot of a student's simulation of how a ball less dense than water stays afloat in a bucket being filled by rain. The dynamic simulation shows how buoyancy works, with a charming setup of clouds, rain, a ball, and a bucket. The simulation and the student's notes (not presented here for privacy reasons) clearly show that the student learned not only the modeling tool but also the science during the construction process, because the simulation produces exactly the emergent behavior the student intended and explained.

The second image is a screenshot of a student's simulation about the gas laws. Designing something that appears to violate a law of physics is often highly motivating to students, who are inspired to use their creativity to come up with every imaginable way to break one. This student designed a subtle situation in which all atoms in one container move only in the direction perpendicular to the piston, while atoms in another container move in both the perpendicular and parallel directions, with an initial setup that guarantees equipartition of the kinetic energy between the directions. The simulation shows that the volume of the gas in the right container is approximately half that of the gas in the left container. Is the Ideal Gas Law broken? We leave this question to you.

The third image is a screenshot of a simulation of a salt crystal in water that a student created using the 3D Molecular Simulator. It shows that the student knew what a crystal structure is and how dissolving occurs. Considering the complexity of constructing a 3D model (over a 2D one), this student's work is quite impressive. The fourth image is a screenshot of a simulation of photosynthesis created by another student, which shows the student's understanding of this complex biological process and her effort in modeling it.

A common challenge in using a general-purpose modeling tool in the classroom is that making something pertinent to the learning goals may take students more time than teachers are willing to spend. Tempted by the versatility of the tool, some students even tend to "drift" away from the learning goals. To help students stay focused on the science, the Molecular Workbench software lets instructors design scaffolded construction activities that engage students in building simulations. This is a unique and important feature of the software that should facilitate wide adoption of this pedagogy.

From the point of view of assessment, the richness of information expressed in these simulations has much to offer research and evaluation on the use of computer simulations in the classroom. If, as the Chinese proverb says, "a picture is worth a thousand words," then a simulation may be worth far more than a thousand words for the assessment of student learning. Ultimately, the most reliable and relevant assessment of educational simulations should use simulations themselves as the data sources. The only way to make this assessment work is to engage students in making their own simulations.

Wednesday, February 18, 2009

Making sense of quantum phenomena through simulations

"We have become quantum mechanics -- engineering and exploring the properties of quantum states. We're paving the way for the future nanotechnicians." --- Donald M. Eigler, IBM Fellow
Understanding how things work in the microscopic world is fundamentally important to science and engineering education in this century. The micro world is governed by quantum mechanics, which is traditionally very difficult to learn--even for a physics student--because it is so counterintuitive. Nevertheless, a large number of phenomena can only be understood in the quantum picture, and understanding them is becoming imperative. Many important technologies, such as microelectronics and nanophotonics, are built upon the science of electrons. These technologies are now spearheading innovations that will lead to revolutionary changes in manufacturing, computing, communication, health care, and medicine.

This poses an interesting challenge to educators: how do we teach quantum reasoning to students without getting them bogged down in the complex field of quantum mechanics (and possibly the philosophical issues associated with its strange interpretations, which are still at issue among some scientists and philosophers)? Is there a pathway for students to develop a quantum sense without resorting to the formalism of quantum mechanics?

Funded by the National Science Foundation, we are currently exploring effective ways to teach the science of electrons and related technologies through simulations. Unlike many existing interactives on the web, our simulation program will provide students with a tool for familiarizing themselves with the strange quantum world without having to learn any equations at all. They will learn by playing with existing systems set up by curriculum developers or customized by their instructors (learning by interacting), or by designing new virtual devices of their own (learning by designing), such as a multigate field-effect transistor, a quantum dot, a nanowire, or a molecular switch.

Unlike the common approach in which knowledge is simply told, these quantum simulations allow students to discover how tiny things behave by exploring the emergent behaviors of microelectronic systems. For example, photon absorption and stimulated emission emerge from a quantum dynamics simulation of a bound electron and a laser. The simulation also lets students discover that it is the frequency of the laser, not the intensity, that determines these important processes. Another example is related to the core of chemistry. Our quantum dynamics simulator can be used to model how the electron cloud changes when an atom is polarized (see the left part of the image on the left). When the user moves a second nucleus closer, there is a dramatic change in the electron cloud--it now covers both nuclei. This binds the two nuclei strongly through the electron cloud, which is called covalent bonding. When the user applies an external electric field (other than that of a point charge), it also causes polarization. When the intensity of the field increases beyond a certain point, the electron cloud is stripped away from the nucleus--a phenomenon we call ionization. It is fascinating to see these fundamental concepts of chemistry simply emerge from our quantum dynamics simulations! (See this page for more information.) These seemingly disparate concepts can be learned through a single, coherent picture of a moving electron cloud. The technology gives us a fresh opportunity to revisit the reductionist approach, which advocates teaching fewer but more fundamental scientific principles and deriving other knowledge from them (see also the recent article "How less can be more" by Bob Tinker).

In addition to the quantum dynamics simulator, we are also building a user interface that students can use to design systems such as a chemical reaction or a nanoscale circuit board.

These virtual experiments and virtual designs provide an accessible way to learn about quantum phenomena. After all, a large part of the difficulty in understanding quantum phenomena stems from trying to explain microscopic things in terms of our everyday experience, along with some philosophical issues that technical and engineering students may not care about. If this obstacle is removed, understanding quantum phenomena should not be much more difficult than understanding water waves and optics. Computer models simply streamline this learning process, as if students had a powerful, ultrafast microscope for looking into the micro world. Visualizing how a nanoelectronic device works will help students understand the mechanism, just like a video that shows how air flows in a wind tunnel. Creating virtual devices and observing their properties will allow students to apply their knowledge and deepen their learning, just like designing a stream table and then running water through it. Through this intimate interaction with a vivid simulated micro world, students will learn more deeply than through the standard treatment in a solid-state physics or chemistry textbook, which either attempts to teach quantum concepts through daunting formalism or static illustrations, or avoids them entirely.

It is important to point out that, although we do not try to teach the formalism of quantum mechanics, we use the theory to create the simulation tool. The quantum dynamics simulation engine behind our user interface is based on numerically solving the time-dependent Schrödinger equation, and our computational method draws on cutting-edge research in computational physics (e.g., a fast finite-difference time-domain method and novel boundary conditions). Because of this, our tool delivers accurate simulations that correctly depict the spatial distribution and the time evolution of electrons. This is very important, because it ensures the quality and scientific integrity of our simulations.
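The finite-difference time-domain idea mentioned above can be illustrated in a few dozen lines. Below is a minimal 1D sketch in Python (assuming numpy), using Visscher's real/imaginary leapfrog scheme with hbar = m = 1. It is only a toy analogue of the approach, not the actual Molecular Workbench engine, and the wave packet and barrier parameters are made up for illustration.

```python
# Minimal 1D FDTD-style integrator for the time-dependent Schrodinger
# equation (Visscher's real/imaginary leapfrog scheme). Units: hbar = m = 1.
# This is an illustrative toy, not the Molecular Workbench engine.
import numpy as np

def hamiltonian(psi, V, dx):
    """Apply H = -1/2 d^2/dx^2 + V with a 3-point Laplacian (hard walls)."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

def evolve(R, I, V, dx, dt, steps):
    """Leapfrog: update the real part, then the imaginary part."""
    for _ in range(steps):
        R = R + dt * hamiltonian(I, V, dx)
        I = I - dt * hamiltonian(R, V, dx)
    return R, I

# A Gaussian wave packet heading toward a square potential barrier.
n, dx = 400, 0.1
x = np.arange(n) * dx
V = np.where(np.abs(x - 30.0) < 0.5, 1.0, 0.0)   # barrier for tunneling demo
k0, sigma, x0 = 2.0, 2.0, 15.0
psi = np.exp(-((x - x0) / sigma) ** 2 / 2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalize to 1
R, I = psi.real.copy(), psi.imag.copy()

R, I = evolve(R, I, V, dx, dt=0.002, steps=5000)
norm = np.sum(R**2 + I**2) * dx                  # should stay close to 1
trans = np.sum((R**2 + I**2)[x > 30.5]) * dx     # probability past the barrier
```

The leapfrog update is explicit and stable as long as dt times the largest energy on the grid stays small, and the (approximately) conserved norm is a quick sanity check on the integration.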
Image captions: 1) A quantum corral (click here to launch the model). 2) The probability wave just in the middle of a quantum tunneling event (click here to launch the model). 3) The electron clouds in the polarization of an atom and the formation of a covalent bond (click here to launch the model). 4) A nano star coupler that splits an input signal into three output signals (click here to launch the model).

Tuesday, November 11, 2008

Multicore computing: now and future

Moore's Law has been the golden rule for predicting the growth of personal computing power for more than a decade, but change has arrived. A couple of years ago, Amdahl's Law became the governing law (without an inauguration). Multicore computing is now the critical driving force behind computer performance. As of October 2008, Intel's Xeon 7400 series ("Dunnington") offers six cores.

The trend is that, whether we know it or not, computers will have more and more cores, and your personal computer will rival the computing power of what the government defined as a supercomputer only a decade ago. There are, of course, debates going on among CPU designers about whether to make a personal computer something like a shiny iPhone box, or a little machine that can easily crack current military codes (http://www.eetimes.com/showArticle.jhtml?articleID=206105179). This is a critical decision that will affect the architecture of future computers: will the next generation of CPUs be a bundle of different special-purpose cores, or will they be made of homogeneous, generic cores that can be assigned any task? (Or maybe they should be a combination of special-purpose cores such as GPUs and generic cores, as that seems to be the way our brains work.)
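Amdahl's Law, invoked above, is easy to state concretely: if a fraction p of a program can run in parallel, then n cores give a speedup of 1 / ((1 - p) + p / n). A quick sketch (the values of p are arbitrary illustrations) shows why piling on cores helps little unless p is very close to 1:

```python
# Amdahl's Law: with parallel fraction p, the speedup on n cores is
# S(n) = 1 / ((1 - p) + p / n). The serial fraction (1 - p) caps the
# benefit no matter how many cores are added.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"p = {p}: 2 cores -> {amdahl_speedup(p, 2):.2f}x, "
          f"128 cores -> {amdahl_speedup(p, 128):.2f}x")
```

Even with 99% of the work parallelized, 128 cores yield only about a 56x speedup; with 50% parallelized, they yield less than 2x.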

As a software developer, I clearly want more generic cores, as they are my power base. One could suggest that a developer try to tap the power of something like a GPU (as folding@home seems to be doing quite well), but the real questions are: (1) Do we really want to learn the low-level libraries for each type of special-purpose core in order to use them? (2) Do we really want our applications to be bound to special-purpose cores, which raises cross-platform issues?

On the one hand, if a CPU comes with a lot of power that cannot be easily harnessed by an average programmer, that power becomes the privilege of a few elite developers. On the other hand, if average programmers like me cannot make a convincing argument that we could develop killer applications given more generic power, then the industry has every reason to doubt that generic power will be useful to the vast majority of people out there.

So, can we come up with some cool ideas for how multicore computing may benefit average people like Joe and Jane (not just offer dream machines to evil hackers--for them to break into our bank accounts)?

I feel it is not easy to present a clear example of how I would use the 128-core CPU predicted to be available in a single notebook machine in less than ten years (note that each core will surely run faster than the ones in your current dual-core CPU--just imagine the power we will have at our fingertips). It is hard for me to imagine an application that would invoke 128 processes simultaneously. But I recognize that I probably have a mental block: the fact that I cannot see the big picture now does not mean it does not exist.

My background as a computational physicist gives me some hints about how things might develop. Parallel computing is essential for solving many scientific problems that involve huge calculations, and computational scientists are used to thinking in the language of parallelism. A 128-core computer is nothing new to them--it is just a shared-memory supercomputer condensed into a laptop box.

Molecular dynamics is a "lab rat" of parallel computing research, because it is relatively simple to implement and study. Given that the Molecular Workbench does molecular dynamics on a personal computer, it may be a wonderful candidate for making a highly relevant case.

The Molecular Workbench currently benefits from multicore computing in two ways. First, there are embarrassingly parallel problems that automatically utilize this power. For example, one can run multiple simulations at the same time; if enough cores are available, each runs on a core independently. This requires no extra work from the programmer, because each simulation runs on a thread that the JVM and the OS assign to a core. It is interesting to note that the model containers in the Molecular Workbench could provide a way to decompose a larger system, if the simulations are synchronized by communicating with one another through scripts.
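The "one thread per simulation" pattern can be sketched in a few lines. The following Python analogue uses a thread pool to run several independent toy simulations; `random_walk` is a hypothetical stand-in model, not Molecular Workbench code. (In CPython the GIL limits the speedup of pure-Python threads; in Java, as described above, the threads really do land on separate cores.)

```python
# Running several independent simulations at once is embarrassingly
# parallel: each run needs no coordination with the others, so a pool
# of workers can map them onto cores. `random_walk` is a toy stand-in
# for a real simulation.
import random
from concurrent.futures import ThreadPoolExecutor

def random_walk(seed, steps=10_000):
    """One independent 'simulation': a seeded 1D random walk."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.choice((-1.0, 1.0))
    return x

# Eight independent runs; with enough cores, each can occupy its own.
with ThreadPoolExecutor(max_workers=8) as pool:
    endpoints = list(pool.map(random_walk, range(8)))
print(endpoints)
```

Because each run is seeded independently, the results are reproducible and order-preserving regardless of how the runs are scheduled across cores.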

Second, the graphics part of a simulation is handled in a different thread than the calculation thread. Therefore, a single simulation can have its molecular dynamics calculations running on one core and its graphics running asynchronously on another. This is most helpful when the refresh rate needs to be high to render motion smoothly.
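The compute/render split described above is a classic producer-consumer arrangement: the simulation thread pushes finished frames into a queue and a separate "renderer" thread consumes them, so a slow display never stalls the physics. A toy Python sketch of the two-thread pattern (the frame contents are a made-up stand-in):

```python
# Two threads decoupled by a bounded queue: one produces simulation
# frames, the other consumes ("renders") them asynchronously.
import queue
import threading

frames = queue.Queue(maxsize=4)      # small buffer between the two threads
rendered = []

def simulate(n_frames):
    state = 0.0
    for i in range(n_frames):
        state += 0.1                 # stand-in for one MD time step
        frames.put((i, state))       # hand the finished frame over
    frames.put(None)                 # sentinel: simulation finished

def render():
    while True:
        frame = frames.get()
        if frame is None:
            break
        rendered.append(frame)       # stand-in for drawing to the screen

sim = threading.Thread(target=simulate, args=(100,))
gfx = threading.Thread(target=render)
sim.start(); gfx.start()
sim.join(); gfx.join()
```

The bounded queue also provides natural back-pressure: if rendering falls behind, the simulation thread simply blocks until a slot frees up.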

The Java molecular dynamics code of the Molecular Workbench, however, has not been parallelized. I have been playing with java.util.concurrent to parallelize it, but at this point it seems the gain won't be measurable (if it is positive at all!) with only two cores, as is the case for most personal computers today. The overhead of task coordination may cost more than it is worth.
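To make the trade-off concrete, here is a hedged Python sketch of the kind of decomposition involved: the pairwise interactions of a toy MD energy evaluation are split into chunks and dispatched to a worker pool. The decomposition itself is easy; the per-step splitting, dispatching, and merging is exactly the coordination overhead that can swamp the gain on a two-core machine. The Lennard-Jones-style energy and the positions are made up for illustration; this is not the Molecular Workbench code.

```python
# Splitting the pairwise loop of a toy MD energy evaluation across
# workers. The chunked parallel result must match the serial result;
# the coordination cost per step is what may outweigh the speedup on
# only two cores.
import itertools
from concurrent.futures import ThreadPoolExecutor

def pair_energy(positions, pairs):
    """Lennard-Jones-like energy over an assigned chunk of pairs."""
    e = 0.0
    for i, j in pairs:
        r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
        e += 4.0 * (r2 ** -6 - r2 ** -3)     # epsilon = sigma = 1
    return e

def total_energy(positions, workers=2):
    pairs = list(itertools.combinations(range(len(positions)), 2))
    chunk = (len(pairs) + workers - 1) // workers
    chunks = [pairs[k:k + chunk] for k in range(0, len(pairs), chunk)]
    # Dispatch the chunks and merge the partial sums: this split/merge
    # step is the per-step overhead discussed above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: pair_energy(positions, c), chunks))

positions = [(0.0, 0.0), (1.1, 0.0), (0.0, 1.2), (1.0, 1.3)]
e_parallel = total_energy(positions)
e_serial = pair_energy(positions,
                       itertools.combinations(range(len(positions)), 2))
```

With many cores the chunked loop amortizes its coordination cost; with two, the pool setup and merging can eat the entire gain.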

But if I had a 128-core CPU backing my pool of simulation threads, the story could be quite different.

Besides scientific simulations, 3D navigation environments such as Second Life would also benefit enormously from multicore computing. The process of downloading and constructing the landscape can easily be decomposed into chunks and assigned to different cores.