Sunday, May 9, 2021

Osmosis: We can simulate it, but do we really get it?

Computer simulations are useful for developing conceptual understanding of science ideas that are otherwise obscure. However, there are circumstances in which a simulation raises more questions than it answers. Osmosis is one of those deceptively simple phenomena that turn out to be quite challenging to understand, even though it can be simulated with reasonable clarity.
Figure 1. Osmotic pressure.
(Image from Wikipedia)


Osmosis is a process in which solvent molecules move -- without any input of energy -- across a semipermeable membrane (one that lets only the solvent molecules pass) separating two solutions of different concentrations. A typical explanation is that the solvent molecules "want" to equalize the solute concentrations (or, equivalently, the solvent concentrations) on the two sides. As a result, a persistent pressure builds up that causes the liquid level on the left side of the U-shaped tube shown in Figure 1 to rise noticeably against gravity.
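
For a dilute solution, the magnitude of this osmotic pressure is given by the van 't Hoff relation, and balancing it against the hydrostatic pressure of the raised column gives a quick estimate of the height difference (a standard textbook estimate, independent of the simulation discussed below):

    Π = cRT,    h = Π / (ρg)

where c is the molar concentration of the solute, R the gas constant, T the absolute temperature, ρ the density of the solution, and g the gravitational acceleration. For example, c = 1 mol/m³ at T = 300 K gives Π ≈ 2.5 kPa, enough to support a water column of about 25 cm.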

Most people just walk away with this explanation. But where exactly does the energy that lifts the liquid come from?

In the U-shaped tube, nothing exerts a force on the liquid in either arm other than gravity, which keeps the liquid levels as low as possible. Somehow, simply making the membrane in the middle permeable to only the solvent extracts some energy to do the heavy lifting. This process can be simulated using molecular dynamics, as shown in the YouTube video posted above. The simulation shows that, on average, the middle column eventually maintains a noticeably higher level. Removing the solute (the green particles) returns the liquid levels in the three columns to about the same height.

In this simulation, the green particles and the blue particles have comparable chemical affinities, meaning that the blue particles have no particular preference for their own kind, and neither do the green particles. The white particles that represent the membrane molecules interact only very weakly with both the blue and the green particles.
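
For readers curious about how such a simulation might be set up, here is a minimal sketch of the two ingredients a molecular dynamics code could use: a pair-interaction table in which blue-blue, green-green, and blue-green attractions are comparable, and a membrane rule that reflects solute particles while letting solvent through. The class and parameter names are hypothetical and not taken from the actual applet.

    // Minimal sketch (hypothetical names) of selective rules an MD code
    // could use to model osmosis. Not the actual code behind the applet.
    enum Type { SOLVENT, SOLUTE, MEMBRANE }

    class Particle {
        Type type;
        double x, y, vx, vy;
    }

    class OsmosisRules {
        // Lennard-Jones well depths: comparable for blue-blue, green-green,
        // and blue-green; very weak for anything touching the membrane.
        static double epsilon(Type a, Type b) {
            if (a == Type.MEMBRANE || b == Type.MEMBRANE) return 0.01;
            return 0.1; // same attraction regardless of solvent/solute pairing
        }

        // Semipermeability as a simple rule: a solute particle that tries to
        // cross the membrane plane (x = membraneX) is elastically reflected.
        static void enforceMembrane(Particle p, double oldX, double membraneX) {
            boolean crossed = (oldX - membraneX) * (p.x - membraneX) < 0;
            if (crossed && p.type == Type.SOLUTE) {
                p.x = 2 * membraneX - p.x; // mirror the position back
                p.vx = -p.vx;              // reverse the normal velocity
            }
            // Solvent particles pass through unhindered.
        }
    }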

Here is the link to the simulation (which is a Java applet).

Wednesday, November 21, 2018

The SmartIR Gallery

Fig. 1: Thermal images in the Gallery
The Gallery is a tool within the SmartIR app that lets users manage the thermal images, temperature graphs, and other data they collect with the app. Unlike the default gallery of the operating system, SmartIR Gallery shows only content related to thermal imaging and provides image processing functions for analyzing thermal images and videos.

Fig. 2: Image processing in the Gallery
The gallery can be opened from the action bar of the SmartIR app. Once launched, it displays, in a grid view, all the files that the user has collected through the app and stored on the device (Figure 1). Tapping an image in the grid opens an enlarged view in a popup window. (Incidentally, the enlarged image in Figure 1 shows the liquid level of a propane tank, revealed by the evaporative cooling that occurs as liquid propane vaporizes to feed gaseous fuel to the pipeline.) Within the window, the user can swipe left or right, or tap the left or right arrow buttons, to browse the images.
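
A minimal sketch of how such swipe-to-browse navigation can be wired up on Android, using the platform's GestureDetector; the class and field names here are illustrative, not SmartIR's actual code:

    import android.view.GestureDetector;
    import android.view.MotionEvent;
    import java.util.List;

    // Illustrative sketch: advance through a list of images on horizontal flings.
    class ImageBrowser extends GestureDetector.SimpleOnGestureListener {
        private final List<String> imagePaths; // hypothetical backing list
        private int index = 0;

        ImageBrowser(List<String> imagePaths) {
            this.imagePaths = imagePaths;
        }

        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2, float vx, float vy) {
            if (Math.abs(vx) > Math.abs(vy)) {              // horizontal swipe
                index += (vx < 0) ? 1 : -1;                 // left fling -> next image
                index = Math.floorMod(index, imagePaths.size());
                // load and display imagePaths.get(index) in the popup's ImageView
                return true;
            }
            return false;
        }
    }
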
Fig. 3: Temperature graphs in the Gallery

Based on OpenCV, SmartIR Gallery provides a number of image analysis tools for the user to quickly process thermal images. For instance, the user can blur, sharpen, invert, and posterize an image, as well as adjust its brightness and contrast (Figure 2). In the left image of Figure 2, sharpening a thermal image makes the view of the house more pronounced than the default rendering of the FLIR ONE thermal camera. Inverting a thermal image, as shown in the middle image of Figure 2, can be a quick way to show how a house heated in the winter might look under the opposite thermal condition, such as an air-conditioned house on a hot summer day. In the right image of Figure 2, posterizing the thermal image creates an artistic view. More OpenCV tools will be added in the future to extend SmartIR's thermal vision capabilities.
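
For those curious how such filters map onto OpenCV calls, here is a minimal sketch using OpenCV's Java bindings. These are plausible implementations of the named operations, not necessarily the exact code inside SmartIR:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Scalar;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    // Plausible OpenCV (Java) implementations of the Gallery's filters;
    // not necessarily SmartIR's actual code.
    class ThermalFilters {

        // Sharpen via unsharp masking: subtract a blurred copy from the original.
        static Mat sharpen(Mat src) {
            Mat blurred = new Mat();
            Imgproc.GaussianBlur(src, blurred, new Size(0, 0), 3);
            Mat dst = new Mat();
            Core.addWeighted(src, 1.5, blurred, -0.5, 0, dst);
            return dst;
        }

        // Invert: flip every channel value (255 - v), turning hot into cold.
        static Mat invert(Mat src) {
            Mat dst = new Mat();
            Core.bitwise_not(src, dst);
            return dst;
        }

        // Brightness/contrast: dst = alpha * src + beta.
        static Mat brightnessContrast(Mat src, double alpha, double beta) {
            Mat dst = new Mat();
            src.convertTo(dst, -1, alpha, beta);
            return dst;
        }

        // Posterize: quantize each 8-bit channel down to a few levels.
        static Mat posterize(Mat src, int levels) {
            Mat dst = new Mat();
            double step = 256.0 / levels;
            Core.divide(src, new Scalar(step, step, step), dst);
            Core.multiply(dst, new Scalar(step, step, step), dst);
            return dst;
        }
    }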

SmartIR Gallery also allows the user to filter the image files. For instance, the user can choose to show only the graphs, which are screenshots taken from the temperature chart of SmartIR (Figure 3). Using the Share button, the user can easily send the displayed image to another app, such as an email or social network app.
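
Sharing an image to another app on Android usually goes through the platform's standard send intent. A minimal sketch follows; the file URI handling is simplified, and on recent Android versions the Uri would need to come from a FileProvider:

    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;

    // Minimal sketch of sharing an image via Android's standard chooser.
    class ShareHelper {
        static void shareImage(Context context, Uri imageUri) {
            Intent intent = new Intent(Intent.ACTION_SEND);
            intent.setType("image/*");
            intent.putExtra(Intent.EXTRA_STREAM, imageUri);
            intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
            context.startActivity(Intent.createChooser(intent, "Share thermal image"));
        }
    }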

Wednesday, September 19, 2018

A Small Step towards a Big Dream of Infrared Street View

The Infrared Street View is an award-winning project recognized by the U.S. Department of Energy and subsequently funded by the National Science Foundation. The idea is to create a thermal equivalent of Google's Street View that would serve as the starting point for developing a thermographic information system (i.e., the "Map of Temperature"). This is an ambitious goal that is normally only attainable with big investments from Wall Street or by big companies like Google. However, as a single developer without a lot of resources, I decided to give it a shot on my own. Being a non-Googler, I am counting on citizen scientists out there to help me build the Infrared Street View. The first step is to create a free app so that they have a way to contribute.

My journey started in mid-July 2018. In two months, I have learned how to develop a powerful app from scratch. Now the Infrared Street View is coming into sight! This blog article shows some of the (imperfect but promising) results, as demonstrated in Figures 1 and 2.


Fig. 1: Panoramas in visible light and infrared light generated by SmartIR
This milestone is about developing the functionality in the SmartIR app for creating infrared panoramas, so that anyone who has a smartphone with an infrared camera attachment such as FLIR ONE can produce a panoramic image and contribute it to the Infrared Street View, much like what you can do with Google's Street View app. Although this sounds easy at first glance, it has turned out to be quite challenging, as we must work under the constraint of a very slow infrared thermal camera that can take fewer than ten pictures per second. As our app targets average people who may be interested in science and technology, we must provide an easy-to-use interface so that most people can do the job without being overwhelmed. Lastly, to create virtual reality in infrared light, we must overcome the challenge of stitching the imperfect thermal images into a seamless panoramic picture. Although there are many image stitchers out there, no one can be sure that they are applicable to thermal images, as those stitchers may have been optimized for visible-light images only.
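
A first attempt at stitching could simply hand the captured frames to OpenCV's high-level stitching pipeline. Here is a minimal sketch, assuming an OpenCV 4.x build whose Java bindings expose the stitching module; whether this pipeline, tuned for visible-light photos, holds up on thermal imagery is exactly the open question raised above:

    import java.util.List;
    import org.opencv.core.Mat;
    import org.opencv.stitching.Stitcher;

    // Sketch of feeding a burst of overlapping thermal frames to OpenCV's
    // high-level stitcher. Results on thermal imagery are not guaranteed,
    // since the pipeline was designed around visible-light photos.
    class PanoramaBuilder {
        static Mat stitch(List<Mat> frames) {
            Stitcher stitcher = Stitcher.create(); // default: panorama mode
            Mat pano = new Mat();
            int status = stitcher.stitch(frames, pano);
            if (status != 0) { // 0 corresponds to Stitcher::OK
                throw new RuntimeException("Stitching failed with status " + status);
            }
            return pano;
        }
    }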

Fig. 2: Panoramas in visible light and infrared light (two coloring styles) generated by SmartIR
To help users make 360° panoramas, SmartIR guides them to aim at the right angles so that the resulting image set can be used for stitching. These images should be evenly distributed in the azimuthal dimension, and they should overlap considerably so that the stitcher has a clue about how to knit them together. SmartIR uses the on-board sensors of the smartphone to detect the orientation of the infrared camera. A number of circles are shown on the screen to indicate the orientations at which the user should aim the cursor of the camera. When the cursor is within a circle, an image is automatically taken and stored in a 360° scroller. By turning at a fixed position and aiming at the circles, the user can capture a series of images for stitching. The following YouTube videos show how this image collector works.
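
A minimal sketch of the sensor side of such guidance, using Android's rotation-vector sensor to track the camera's azimuth and fire a capture when it comes close to one of a set of evenly spaced target angles. The target spacing and tolerance here are made-up values, and captureImage() is a placeholder:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Sketch: watch the phone's orientation and auto-capture when the camera
    // points close enough to one of N evenly spaced azimuth targets.
    class PanoramaGuide implements SensorEventListener {
        private static final int TARGETS = 12;          // every 30 degrees (illustrative)
        private static final float TOLERANCE_DEG = 3f;  // illustrative
        private final boolean[] captured = new boolean[TARGETS];
        private final float[] rotation = new float[9];
        private final float[] orientation = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
            SensorManager.getRotationMatrixFromVector(rotation, event.values);
            SensorManager.getOrientation(rotation, orientation);
            float azimuthDeg = (float) Math.toDegrees(orientation[0]); // -180..180
            if (azimuthDeg < 0) azimuthDeg += 360;

            int nearest = Math.round(azimuthDeg / (360f / TARGETS)) % TARGETS;
            float targetDeg = nearest * (360f / TARGETS);
            float diff = Math.abs(azimuthDeg - targetDeg);
            if (diff > 180) diff = 360 - diff;

            if (diff < TOLERANCE_DEG && !captured[nearest]) {
                captured[nearest] = true;
                captureImage(nearest); // placeholder: trigger the thermal camera
            }
        }

        private void captureImage(int slot) { /* hand off to the camera pipeline */ }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }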



Although this is a very primitive prototype, it nonetheless represents the first concrete step towards the realization of the Infrared Street View. Down the road, stitchers for infrared thermal images still need significant improvement to achieve truly seamless results like those for visible-light images. Tremendous challenges in weaving the Map of Temperature still lie ahead. I will keep folks posted as I inch towards the goal, and I am quite optimistic that I can get somewhere, even though I am not a Googler.

Sunday, September 9, 2018

Creating Augmented Reality Experiences in the Thermal World with SmartIR

Location-based augmented reality (AR) games such as Pokemon Go have become popular in recent years. What can we learn from them to make SmartIR a fun science app? Over the past week, I have been experimenting with AR in SmartIR using the location and orientation sensors of the smartphone. This article shows the progress, however limited, that I have achieved thus far. Although there are still tons of challenges ahead, AR appears to be a promising direction to explore further in the world of infrared thermal imaging.

According to Wikipedia, augmented reality is an interactive experience of a real-world environment whereby the objects in the real world are "augmented" by computed information. Typically, AR is implemented with a mobile device that has a camera and a display. In a broad sense, an image from a FLIR ONE thermal camera using the so-called MSX technology is AR by default, as it blends a photo of the real world with false colors generated from the thermal radiation of the objects in the real world, measured by the camera's microbolometer array. Similarly, the object-tracking work I recently did for Project Snake Eyes can augment thermal images with information computed from the infrared radiation data of the recognized objects, such as their silhouettes and their average temperatures.
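
The blending idea behind such imagery can be approximated with a few OpenCV calls: extract edges from the visible-light photo and add them onto the false-color thermal image. This is only a rough imitation of the effect, not FLIR's proprietary MSX algorithm:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    // Rough imitation of MSX-style enhancement (not FLIR's algorithm):
    // overlay edges detected in the visible photo onto the thermal image.
    // Assumes both Mats have the same dimensions.
    class EdgeOverlay {
        static Mat blend(Mat visibleGray, Mat thermalColor) {
            Mat edges = new Mat();
            Imgproc.Canny(visibleGray, edges, 50, 150);  // visible-light edges
            Mat edgesColor = new Mat();
            Imgproc.cvtColor(edges, edgesColor, Imgproc.COLOR_GRAY2BGR);
            Mat dst = new Mat();
            Core.addWeighted(thermalColor, 1.0, edgesColor, 0.4, 0, dst);
            return dst;
        }
    }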

Fig. 1: Very simple AR demos in SmartIR

But these are not the AR applications that I want to talk about in this article. The AR experience I hope to create is more similar to games like Pokemon Go, which are based on information geotagged by some users and discovered by others. The involvement of users in creating AR content is critical to our app, as it aims to promote location-based observation, exploration, and sharing with thermal imaging around the world, and to aggregate large-scale thermal data for citizen science applications. Figure 1 shows the augmentation of the thermal image view with geotagged information for a house. If you are wondering about the usefulness of this feature beyond its coolness (the so-what question), imagine tagging the weak points of a building's thermal envelope during a home energy assessment. The following YouTube videos show how geotagging works in the current version of SmartIR and how users can later discover those geotags.
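
A geotag of this kind boils down to a small record that pairs a location and viewing direction with the thermal data captured there. A minimal sketch of what such a record might hold; the field names are hypothetical, not SmartIR's actual schema:

    // Hypothetical sketch of the data a thermal geotag might carry.
    class ThermalGeotag {
        double latitude;        // where the observation was made
        double longitude;
        float azimuthDeg;       // compass direction the camera was facing
        String imageUri;        // stored thermal image (or panorama) to overlay
        float averageTempC;     // summary reading, e.g. of a tagged wall
        String note;            // user annotation, e.g. "drafty window frame"
        long timestampMillis;   // when the tag was created
    }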




At this point in the development, I envision SmartIR providing the following AR experiences. For users who have a thermal camera, a geotag can obviously guide them to find and observe something previously marked by others. What if you don't have a thermal camera but would still like to see how a place would look through the lens of one? In that case, a geotag allows you to see a plain thermal image or virtual reality (VR) view stored in the geotag, taken previously by someone who had a thermal camera and overlaid in the direction of the current view on the screen of your phone. If VR is provided for the tagged site, the thermal image can also change as you turn your phone. Although nothing beats using a thermal camera to explore on your own, this is a "better than nothing" solution that mimics the experience of using one. In fact, this is the vision of our Infrared Street View project, which aims to provide a thermal view of our world. In addition to the Web-based approach to exploring the Infrared Street View, AR provides a location-based approach that may be more intuitive and exciting.
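
Deciding when to overlay a stored thermal view amounts to checking whether the user stands near a geotag and faces roughly the direction in which it was recorded. A minimal sketch using Android's Location utilities; the thresholds are made-up values:

    import android.location.Location;

    // Sketch: show a geotag's stored thermal image when the user is close to
    // the tagged spot and facing roughly the recorded direction.
    class GeotagOverlay {
        private static final float MAX_DISTANCE_M = 30f;     // illustrative
        private static final float MAX_ANGLE_DIFF_DEG = 20f; // illustrative

        static boolean shouldShow(Location user, float userAzimuthDeg,
                                  double tagLat, double tagLon, float tagAzimuthDeg) {
            Location tag = new Location("geotag");
            tag.setLatitude(tagLat);
            tag.setLongitude(tagLon);
            if (user.distanceTo(tag) > MAX_DISTANCE_M) return false;

            float diff = Math.abs(userAzimuthDeg - tagAzimuthDeg) % 360;
            if (diff > 180) diff = 360 - diff;
            return diff < MAX_ANGLE_DIFF_DEG; // facing the recorded direction
        }
    }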