I recently made a post – Making and doing – about the importance of moving the focus of radical nanotechnology away from the question of how artefacts are to be made, and towards a deeper consideration of how they will function. I concluded with the provocative slogan Matter is not digital. My provocation has been rewarded with detailed attempts to rebut my argument from both Chris Peterson, VP of the Foresight Institute, on Nanodot, and Chris Phoenix of the Center for Responsible Nanotechnology, on the CRNano blog. Here’s my response to some of the issues they raise.
First of all, on the basic importance of manufacturing:
Chris Peterson: Yes, but as has been repeatedly pointed out, we need better systems that make things in order to build better systems that do things. Manufacturing may be a boring word compared to energy, information, and medicine, but it is fundamental to all.
Manufacturing will always be important; things need to be made. My point is that by becoming so enamoured with one particular manufacturing technique, we run the risk of choosing materials to suit the manufacturing process rather than the function that we want our artefact to accomplish. To take a present-day example, injection moulding is a great manufacturing method: it’s fast, it’s cheap, and it can make very complex parts with high dimensional fidelity. Of course, it only works with thermoplastics; sometimes this is fine, but every time you eat with a plastic knife you expose yourself to the results of a sub-optimal materials choice forced on you by the needs of a manufacturing process. Will MNT similarly limit the materials choices that you can make? I believe so.
Chris Peterson: But isn’t it the case that we already have ways to represent 3D molecular structures in code, including atom types and bonds?
Certainly we can represent structures virtually in code; the issue is whether we can output that code to form physical matter. For this we need some basic, low-level machine-code procedures from which complex algorithms can be built up. Such a procedure would look something like: depassivate point A on a surface. Pick up a building block from reservoir B. Move it to point A. Carry out a mechanosynthesis step to bond it to point A. Repassivate if necessary. Much of the debate between Chris Phoenix and Philip Moriarty concerned the constraints that surface physics puts on the sorts of procedures you might use. In particular, note the importance of the idea of surface reconstructions. The absence of such reconstructions is one of the main reasons why hydrogen-passivated diamond is by far the best candidate for a working material for mechanosynthesis.
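To make the flavour of that machine code concrete, here is a purely illustrative Python sketch. Nothing in it models real surface physics, and every class, function and parameter name is invented for the purpose; the point is just that a higher-level build program would be composed from a small repertoire of site-level primitives like the ones listed above.

```python
# A purely illustrative sketch of the low-level "machine code" idea described
# above. Nothing here models real surface physics, and every name is invented:
# the Tool class simply records the sequence of primitive operations, so that a
# higher-level build "program" can be written in terms of them.

from dataclasses import dataclass, field

@dataclass
class Site:
    x: int
    y: int
    passivated: bool = True          # hydrogen-passivated diamond surface assumed

@dataclass
class Tool:
    log: list = field(default_factory=list)

    def depassivate(self, site):
        site.passivated = False
        self.log.append(f"depassivate site ({site.x},{site.y})")

    def pick_up(self, reservoir):
        self.log.append(f"pick up building block from reservoir {reservoir}")

    def move_to(self, site):
        self.log.append(f"move tip to site ({site.x},{site.y})")

    def bond(self, site):
        self.log.append(f"mechanosynthesis step at ({site.x},{site.y})")

    def repassivate(self, site):
        site.passivated = True
        self.log.append(f"repassivate site ({site.x},{site.y})")

def place_block(tool, site, reservoir="B"):
    """One primitive sequence: the machine-code step listed in the text."""
    tool.depassivate(site)
    tool.pick_up(reservoir)
    tool.move_to(site)
    tool.bond(site)
    tool.repassivate(site)           # only if necessary, in a real system

# A toy "program" composed from the primitive: deposit a row of three blocks.
tool = Tool()
for x in range(3):
    place_block(tool, Site(x, 0))
print("\n".join(tool.log))
```

The constraints that surface physics places on primitives like these begin to answer Chris Peterson’s next question…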
Chris Peterson: How did we get into the position of needing to use only one material here?
…which is further answered by Chris Phoenix’s explanation of why matter can be treated with digital design principles, an explanation that focuses on the non-linear nature of covalent bonding:
Chris Phoenix: Forces between atoms as they bond are also nonlinear. As you push them together, they “snap” into position. That allows maintenance of mechanical precision: it’s not hard, in theory, for a molecular manufacturing system to make a product fully as precise as itself. So covalent bonds between atoms are analogous to transistors. Individual bonds correspond to the ones and zeros level.
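Before responding, it is worth making the “snap” point concrete with a toy numerical sketch. The numbers are arbitrary and no real bonding physics is assumed: the only point is that if every placement is pulled back to the nearest discrete lattice site, positional errors are corrected at each step rather than accumulating, which is exactly the digital, precision-restoring property being claimed.

```python
# Toy illustration of the "snap into position" point. If each placement is
# pulled to the nearest discrete lattice site (a nonlinear, restoring step),
# small random errors are corrected every cycle instead of accumulating.
# The numbers are arbitrary; this models no real bonding physics.

import random

random.seed(0)
LATTICE = 0.25        # assumed lattice spacing (arbitrary units)
NOISE = 0.03          # assumed positioning error per step

analog_pos, digital_pos = 0.0, 0.0
for step in range(1000):
    analog_pos += LATTICE + random.gauss(0, NOISE)        # errors add up
    digital_pos += LATTICE + random.gauss(0, NOISE)
    digital_pos = round(digital_pos / LATTICE) * LATTICE  # "snap" to nearest site

print(f"analog drift after 1000 steps:  {analog_pos - 1000 * LATTICE:+.3f}")
print(f"digital drift after 1000 steps: {digital_pos - 1000 * LATTICE:+.3f}")
```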
So it looks like we’re having to restrict ourselves to covalently bonded solids. Goodbye to metals, ionic solids, molecular solids, macromolecular solids… it looks like we’re now stuck choosing among the group IV elements, the classical compound semiconductors and other compounds of elements from groups III-VI. Of these, diamond seems the best choice. But are we stuck with a single material? Chris Phoenix thinks not…
Chris Phoenix: By distinguishing between the nonlinear, precision-preserving level (transistors and bonding) and the level of programmable operations (assembly language and mechanosynthetic operations), it should be clear that the digital approach to mechanosynthesis is not a limitation, and in particular does not limit us to one material. But for convenience, an efficient system will probably produce only a few materials.
This analogy is flawed. In a microprocessor, all the transistors are the same. In a material, the bonds are not the same. This is obviously true if the material contains more than one type of atom, and even if it contains only a single type of atom the bonds won’t be the same if the working surface has any non-trivial topography – hence the importance of steps and edges in surface chemistry. If the bonds don’t behave in the same way, a mechanosynthetic step which works on one bond won’t work on another, and your simple assembly language becomes a rapidly proliferating babel of different operations, all of which need to be individually optimised.
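To put a toy number on that proliferation, here is a back-of-envelope sketch. The categories and counts are my own illustrative assumptions, not real surface-chemistry data; the only point is how quickly the operation count multiplies once primitives depend on the local bonding environment.

```python
# Back-of-envelope sketch of the "proliferating assembly language" worry.
# The categories and counts below are illustrative assumptions, not real
# surface-chemistry data; the point is only that primitives multiply with the
# local bonding environment, whereas a processor needs one transistor primitive.

from itertools import product

surface_terminations = ["C(111):H", "C(100):H"]                  # assumed working faces
site_types = ["terrace", "step edge", "kink", "vacancy corner"]  # local topography
building_blocks = ["carbon dimer", "CH3 group", "single C"]      # assumed feedstocks

operations = list(product(surface_terminations, site_types, building_blocks))
print("transistor-style primitives needed: 1")
print(f"site-specific mechanosynthesis primitives needed: {len(operations)}")
for op in operations[:5]:
    print("  tune and verify:", " / ".join(op))
```

Even this crude count runs to a couple of dozen distinct operations, each of which, on the argument above, would need to be individually optimised.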
Chris Phoenix: For nanoscale operations like binding arbitrary molecules, it remains to be seen how difficult it will be to achieve near-universal competence.
I completely agree with this. A classic target for advanced nanomedicine would be to have a surface which resisted non-specific binding of macromolecules, but recognised one specific molecular target and produced a response on binding. I find it difficult to see how you would do this with a covalently bonded solid.
Chris Phoenix: But most products that we use today do not involve engineered nanoscale operations.
This seems an extraordinary retreat. Nanotechnology isn’t going to make an impact by allowing us to reproduce the products we have today at lower cost; it will need to let us make products with functionality that is currently unattainable. These products – and I’m thinking particularly of applications in nanomedicine and in information and communication technologies – will necessarily involve engineered nanoscale operations.
Chris Phoenix: For example, a parameterized nanoscale truss design could produce structures which on larger scales had a vast range of strength, elasticity, and energy dissipation. A nanoscale digital switch could be used to build any circuit, and when combined with an actuator and a power source, could emulate a wide range of deformable structures.
Yes, I agree with this in principle. But we’re coming back to mechanical properties – structural materials, not functional ones. The structural materials we generally use now – wood, steel, brick and concrete – have long since been surpassed by other materials with much superior properties, but we still go on using them. Why? They’re good enough, and the price is right. New structural materials aren’t going to change the world.
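For what it’s worth, the “vast range” claim in the quote is quantitatively plausible. A toy sketch using the standard Gibson-Ashby scalings for cellular solids (the exponents are the textbook values for bending- and stretch-dominated lattices; the solid modulus and densities are just illustrative numbers) shows how a single geometric parameter, the relative density, sweeps the effective stiffness over several orders of magnitude. But stiffness is still a structural property, which is exactly the limitation I am pointing to.

```python
# Toy Gibson-Ashby scaling for a parameterized truss material. The scalings
# (E ~ E_s * rho^2 for bending-dominated lattices, E ~ E_s * rho / 3 for
# stretch-dominated ones) are the standard textbook forms; the solid modulus
# and densities are illustrative numbers, not a design.

E_SOLID_GPA = 1000.0    # assumed modulus of the fully dense solid (diamond-like)

def modulus_bending(rho_rel):
    return E_SOLID_GPA * rho_rel ** 2

def modulus_stretch(rho_rel):
    return E_SOLID_GPA * rho_rel / 3.0

for rho_rel in (0.01, 0.05, 0.2, 0.5):
    print(f"relative density {rho_rel:4.2f}: "
          f"bending-dominated ~{modulus_bending(rho_rel):7.2f} GPa, "
          f"stretch-dominated ~{modulus_stretch(rho_rel):7.2f} GPa")
```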
Chris Phoenix: A few designs for photon handling, sensing (much of which can be implemented with mechanics), and so on should be enough to build almost any reasonable macro-scale product we can design.
Well, I’m not sure I can share this breezy confidence. How is sensing going to be implemented by mechanics? We’ve already conceded that the molecular recognition events that the most sensitive nanoscale sensing operations depend on are going to be difficult or impossible to implement in covalently bonded systems. Designing band structures – which we need to do to control light/matter interactions – isn’t a matter of ordinary mechanics, but of many-body quantum theory.
The idea of being able to manipulate atoms in the same way as we manipulate bits is seductive, but ultimately it’s going to prove very limiting. To get the most out of nanotechnology, we’ll need to embrace the complexities of real condensed matter, both hard and soft.