Before Russell Kerschmann came along, the world through a microscope looked much the way people perceived the world at large to be before Columbus set sail: flat. Microscopes let us see an object’s surface and get some sense of its insides, but its true three-dimensional architecture remained a mystery. No one knew exactly how the two parts of Velcro attach, or precisely how the network of pores in a paper towel enables it to suck up water, or even how the three different layers that make up our skin interact. Then Kerschmann invented a new kind of microscope — and it’s revolutionizing the way scientists see.
Scientists at Procter & Gamble, for example, are harnessing Kerschmann’s technology to study how bone reacts to various drugs as it grows. Engineers at Sandia National Laboratories are employing the new scope to measure the tiny screws and gears they use to build microscopic robots. “We’d be up a creek without it,” says Christine Miller, a pediatric cardiologist at the University of Rochester. Miller is using the new imaging technique to investigate whether raising blood pressure around embryonic chick hearts can cause congenital heart defects.
“We’ve gone through a year and a half of doing this with other technology and have gotten nothing.”
What began as a device that took up too much room on Kerschmann’s kitchen table is now the hub of a multimillion-dollar company, Resolution Sciences Corp., based in Corte Madera, California. The technology, called digital volumetric imaging, provides scientists with accurate three-dimensional images of almost anything they care to look at up close. The images can be rotated and viewed from any angle; they can also be opened up to reveal the sample’s interior.
Kerschmann’s images are so different from what came before that they are even teaching manufacturers and scientists things they never knew about their own products. When Kerschmann imaged Velcro, for example, he learned that the material is inefficiently constructed — most of the binding fibers never make contact.
Kerschmann’s images could help manufacturers engineer a form of Velcro that’s just as strong but costs less to make. “In our lab, I see things almost every day that no one’s ever seen before,” Kerschmann says. “And we’re just beginning to learn all the applications of the technology.”
People have been peering into microscopes for more than 150 years to get a close-up look at the natural world, and the devices have become increasingly sophisticated. But existing microscopes run into problems when magnifying samples that are larger than a few hairs’ diameter. They cannot capture the samples’ internal details three-dimensionally.
A medical pathologist, Kerschmann encountered this problem firsthand about 15 years ago, when he was working at the Wellman Laboratories of Photomedicine at Massachusetts General Hospital in Boston. The lab was developing a laser treatment for the inflated blood vessels that cause unsightly skin imperfections such as port wine stains and spider veins. Kerschmann’s task was to determine the specific laser energy and pulse length that would collapse the blood vessels without damaging collagen, the protein that gives skin its elasticity.
Doctors would take skin samples from volunteers, then Kerschmann would embed them in wax and slice them into paper-thin sections. He then examined each section under a microscope to see where in the branching network of blood vessels the laser hit first. But Kerschmann was soon frustrated. The 2-D world he was seeing through existing scopes could not help him answer what was essentially a 3-D problem.
“I couldn’t get a good look at them,” he recounts. “You see the round cross sections through each of the vessels, but they don’t tell you much about how the vessels interconnect and branch.”
It was possible, Kerschmann knew, to generate a composite 3-D computer image that would reflect all the individual sections of his sample. But he also knew that the cutting process so distorts and damages the slices that piecing them back together afterward would create a grossly inaccurate picture. It would be like slicing up a soft loaf of bread, then gluing the squashed slices back together and assuming you’d reconstructed the shape of the original loaf.
Kerschmann eventually figured out a way to get around the cutting problem. Instead of dicing up an object at the outset and placing it on hundreds of glass slides — the traditional method — he decided to alternate imaging and cutting the object in a sequential process that is ultimately more precise.
Kerschmann usually begins his process with a sample about the size of a pencil eraser. It is stained with fluorescent dyes and embedded in hard black plastic. He then clamps it to his microscope. The scope shoots laser light through the sample, which excites the fluorescent dyes. A digital camera captures the image, and then a blade slices off the sample’s outer layer. That layer, which has been damaged by the cutting, is discarded; the rest of the sample, however, remains intact. Next, the sample’s freshly exposed layer is imaged; then it too is removed. As this two-part process is repeated over and over, the sample slowly shrinks until there is nothing left of it. What is left, however, are the 1,000 or so images of each of its layers, stored on a computer. A software program compiles them into a single, luxuriously detailed, three-dimensional view of the original sample.
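The acquisition loop described above — photograph the exposed face, shave it off, repeat until the sample is gone — can be sketched in a few lines of code. This is purely an illustration, not Resolution’s actual software; the function names and the toy “camera” are invented for the example.

```python
# Illustrative sketch of the image-then-slice loop behind digital
# volumetric imaging: each pass photographs the block face, then the
# blade removes that layer. The sample is destroyed, but the stored
# 2-D images stack up into a 3-D volume.

def acquire_volume(sample_depth_um, slice_thickness_um, capture_face):
    """Repeatedly image the exposed face, then cut it away.

    capture_face(depth_um) stands in for the digital camera: it returns
    a 2-D image (here, a list of pixel rows) of the face at that depth.
    """
    volume = []  # one 2-D image per destroyed layer
    depth = 0.0
    while depth < sample_depth_um:
        volume.append(capture_face(depth))   # photograph the fresh face
        depth += slice_thickness_um          # blade discards that layer
    return volume  # nothing of the sample remains -- only its images


# Toy "camera": a 4x4 face whose pixel values simply record the depth.
def fake_camera(depth_um):
    return [[depth_um] * 4 for _ in range(4)]


# A 2-millimeter-deep sample cut in 2-micron layers yields the roughly
# 1,000 images mentioned in the text.
stack = acquire_volume(sample_depth_um=2000.0, slice_thickness_um=2.0,
                       capture_face=fake_camera)
print(len(stack))
```

In the real instrument, the compilation step then registers these stored images into a single volume that can be rotated, sliced, and viewed from any angle.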
Adding to the precision of Kerschmann’s 3-D images is the fact that the plastic in which he encases his samples is black. That opacity means that when light from the microscope hits the sample, it penetrates only the outermost layer — the one that will be discarded on the next go-round. As a result, there is no visual overlap from one image to the next.
Once the process is complete, Resolution sends the client a disc containing the composite image and its many components. The company also provides a computer workstation equipped with special software that enables clients to view, analyze, and manipulate the data. The cost of the package is upward of $24,000, but clients say it’s well worthwhile.
“We’ve learned loads of stuff about our gears that we didn’t know before,” says Doug Chinn, a scientist at Sandia National Laboratories in Livermore, California. Chinn manufactures gears and screws just a few hundred microns long — about half the size of the tip of a ballpoint pen. They are parts for micromechanical devices such as tiny robots that scientists hope one day will be implanted into people to deliver drugs or protect transplanted cells. To design the parts, engineers rely on computer drawings. But until Resolution came along, Chinn says, his team was unable to determine whether or not the dimensions of the manufactured parts actually matched the drawings. “Resolution’s technology enables us to look at the sides of our parts,” he says. “No other technique gives us the same information.”
Meanwhile, at the Biological Imaging Center at the Beckman Institute at Caltech in Pasadena, Andy Ewald is trying to trace how cells move within frog embryos as they develop. But the embryos are opaque, so he can’t see inside them. Ewald gets around this problem by tagging specific cells with fluorescent dye, then asking Resolution to make images of the embryos at various stages of development — from the initial ball of cells to a fetus with recognizable organs. The data enable him to see inside the embryos and determine which bodily organs his tagged cells eventually turn into.
Useful as it is, Kerschmann’s technology isn’t exactly perfect. Some scientists complain that the microscope can’t handle anything larger than 8 millimeters on an edge — about the size of a sugar cube — and that the resolution of his images declines as samples approach the maximum size. The problem is hardware, Kerschmann says. Today’s computers can’t crunch his data quickly enough.
Still, Kerschmann’s scope is transforming the landscape of the very small. Scientists have been trying to make accurate microscopic 3-D images for more than 100 years, but until now they’ve had to fall back on artists’ renderings of what the object should look like. “We produce the real thing,” Kerschmann says.