Will molecular electronics save Moore’s Law?

Mark Reed, from Yale, was another speaker at a meeting I was at in New Jersey last week. He gave a great talk about the promise and achievement of molecular electronics which I thought was both eloquent and well-judged.

The context for the talk is provided by the question marks hanging over Moore’s law, the well-known observation that the number of transistors per integrated circuit, and thus available computer power, has grown exponentially since 1965. There are strong indications that we are approaching the time when this dramatic increase, which has done so much to shape the recent evolution of the world’s economy, will come to an end.

The semiconductor industry is approaching a “red brick wall”. This phrase comes from the International Technology Roadmap for Semiconductors, an industry consensus document which sets out the technical barriers that need to be overcome in order to maintain the projected growth in computer power. In the technical tables, cells which describe technical problems with no known solution are coloured red, and by 2007-8 these red cells proliferate to the point of becoming continuous – hence the red brick wall.

A more graphic illustration of the problems the industry faces was provided in a plot that Reed showed of surface power density as a function of time. This rather entertaining plot showed that current devices have long surpassed the areal power density of a hot-plate, are not far away from the values for a nuclear reactor, and somewhere around the middle of the next decade will surpass that of the surface of the sun. Now I find the warm glow from my Powerbook quite comforting on my lap, but carrying a small star around with me is going to prove limiting.
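
To get a feel for how relentlessly an exponential trend overtakes these reference points, here is a minimal back-of-the-envelope sketch in Python. The starting value, doubling time and reference power densities are rough illustrative assumptions of mine, not figures taken from Reed’s plot.

```python
# Back-of-the-envelope extrapolation of chip areal power density.
# All numbers are illustrative assumptions, not data from Reed's plot.

reference_points = {          # rough areal power densities, W/cm^2 (assumed)
    "hot plate": 10,
    "nuclear reactor core": 100,
    "surface of the sun": 6000,
}

power_density = 60            # assumed chip value today, W/cm^2
doubling_time = 2.0           # assumed doubling time, in years
year = 2004

for _ in range(15):
    for name, value in list(reference_points.items()):
        if power_density >= value:
            print(f"{year}: chip power density passes the {name} (~{value} W/cm^2)")
            del reference_points[name]
    power_density *= 2 ** (1 / doubling_time)
    year += 1
```

The point is not the precise crossover date, which depends entirely on the assumed numbers, but how little an exponential cares about the gap between a hot plate and a star.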

So the idea that molecular electronics might help overcome these difficulties is quite compelling. In this approach, individual molecules are used as the components of integrated circuits, as transistors or diodes, for example. This provides the ultimate in miniaturisation.

The good news is that (despite the Schön debacle) there are some exciting and solid results in the field. The simplest devices, like diodes, have two terminals, and there is no doubt that single molecule two-terminal devices have been convincingly demonstrated in the lab. Three-terminal devices, like transistors, seem to be vital to make useful integrated circuits, though, and there progress has been slower. It’s difficult enough to wire up two connections to a single molecule, but gluing a third one on is even harder. This feat has been achieved for carbon nanotubes.

What’s the downside? The carbon nanotube transistors have a nasty and underpublicised secret – the connections between the nanotubes and the electrodes are not, in the jargon, Ohmic – that means that electrons have to be given an extra push to get them from the electrode into the nanotube. This makes it difficult to scale them down to the small sizes that would be needed to make them competitive with silicon. And the single molecule devices have the nasty feature that every one is different. Conventional microelectronics works because every one of the tens of millions of transistors on something like a Pentium is absolutely identical. If the characteristics of each component were to vary randomly, the whole way we currently do computing would need to be rethought.

So it’s clear to me that molecular electronics remains a fascinating and potentially valuable research field, but it’s not going to deliver results in time to prevent a slow-down in the growth of computer power that’s going to begin in earnest towards the end of this decade. That’s going to have dramatic and far-reaching effects on the world economy, and it’s coming quite soon.

Training the nanotechnologists of the future

It’s that time of year when academic corridors are brightened by the influx of students, new and returning. I’m particularly pleased to see here at Sheffield the new intake for the Masters course in Nanoscale Science and Technology that we run jointly with the University of Leeds.

We’ve got 29 students starting this year; it’s the fourth year that the course has been running and over that time we’ve seen a steady growth in demand. I hope that reflects an appreciation of our approach to teaching the subject.

My view is that to work effectively in nanotechnology you need two things. First comes the in-depth knowledge and problem-solving ability you get from studying a traditional discipline, whether that’s a pure science, like physics or chemistry, or an applied science, like materials science, chemical engineering or electrical engineering. But then you need to learn the languages of many other disciplines, because no physicist or chemist, no matter how talented at their own subject, will be able to make much of a contribution in this area unless they are able to collaborate effectively with people with very different sets of skills. That’s why to teach our course we’ve assembled a team from many different departments and backgrounds: physicists, chemists, materials scientists, electrical engineers and molecular biologists are all represented.

Of course, the nature of nanotechnology is such that there’s no universally accepted curriculum, no huge textbook of the kind that beginning physicists and chemists are used to. The speed of development of the subject is such that we’ve got to make much more use of the primary research literature than one would for, say, a Masters course in physics. And because nanotechnology should be about practice and commercialisation as well as theory we also refer to the patent literature, something that’s, I think, pretty uncommon in academia.

In terms of choice of subjects, we’re trying to find a balance between the hard nanotechnology of lithography and molecular beam epitaxy and the soft nanotechnology of self-assembly and bionanotechnology. The book of the course, “Nanoscale Science and Technology”, edited by my colleagues Rob Kelsall, Ian Hamley and Mark Geoghegan, will be published in January next year.

What is this thing called nanotechnology? Part 1. The Nano-scale.

Nanotechnology, of course, isn’t a single thing at all. That’s why debates about the subject often descend into mutual incomprehension, as different people use the same word to mean different things, whether it’s business types talking about fabric treatments, scientists talking about new microscopes, or posthumanists and futurists talking about universal assemblers. I’ve attempted to break the term up a little and separate out the different meanings of the word. I’ll soon put these nanotechnology definitions on my website, but I’m going to try out the draft definitions here first. First, the all-important issue of scale.

Nanotechnologies get their name from a unit of length, the nanometer. A nanometer is one billionth of a metre, but let’s try to put this in context. We could call our everyday world the macroscale. This is the world in which we can manipulate things with our bare hands, and in rough terms it covers about a factor of a thousand. The biggest things I can move about are perhaps half a meter across (if they’re not too dense), and my clumsy fingers can’t do very much with things smaller than half a millimeter.

We’ve long had the tools to extend the range of human abilities to manipulate matter on smaller scales than this. Most important is the light microscope, which has opened up a new realm of matter – the microscale. Like the macroscale, this also embraces roughly another factor of a thousand in length scales. At the upper end, objects half a millimeter or so in size provide the link with the macroscale; though still visible to the naked eye, they become much more convenient to handle with the help of a simple microscope or even a magnifying glass. At the lower end, the wavelength of light itself, around half a micrometer, sets a lower limit on the size of objects which can be resolved even with the most sophisticated laboratory light microscope.

Below the microscale is the nanoscale. If we take as the upper limit of the nanoscale the half-micron or so that represents the smallest object that can be resolved in a light microscope, then another factor of one thousand takes us to half a nanometer. This is a very natural lower limit for the nanoscale, because it is a typical size for a small molecule. The nanoscale domain, then, in which nanotechnology operates, is one in which individual molecules are the building blocks of useful structures and devices.
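
For the sake of bookkeeping, here is the same three-domain picture written out explicitly; a minimal sketch, using the (somewhat arbitrary) bounds I’ve adopted above.

```python
# The three length-scale domains, each spanning roughly a factor of a thousand.
# Bounds in metres; the cut-offs are conventions rather than sharp physical limits.
domains = {
    "macroscale": (0.5e-3, 0.5),     # half a millimetre up to half a metre
    "microscale": (0.5e-6, 0.5e-3),  # half a micrometre up to half a millimetre
    "nanoscale":  (0.5e-9, 0.5e-6),  # half a nanometre up to half a micrometre
}

for name, (lower, upper) in domains.items():
    print(f"{name}: {lower:g} m to {upper:g} m (a factor of {upper / lower:g})")
```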

These definitions are by their nature arbitrary, and it’s not worth spending a lot of time debating precise limits on length scales. Some definitions – the US National Nanotechnology Initiative provides one example – use a smaller upper limit of 100 nm. There isn’t really any fundamental reason for choosing this number over any other one, except that this definition carries the authority of President Clinton, who of course is famous for the precision of his use of language. Some other definitions attempt to attach some more precise physical significance to this upper length limit on nanotechnology, by appealing to some length at which finite size effects, usually of quantum origin, become important. This is superficially appealing but unattractive on closer examination, because the relevant length-scale on which these finite size effects become important differs substantially according to the phenomenon being looked at. And this line of reasoning leads to an absurd, but commonly held, view that the nanoscale is simply the length-scale on which quantum effects become important. This is a very unhelpful definition when one thinks about it for longer than a second or two; there are plenty of macroscopic phenomena that you can’t understand without invoking quantum mechanics. Magnetism and the electronic behaviour of semiconductors are two everyday examples. And equally, many interesting nanoscale phenomena, notably virtually all of cell biology, don’t really involve quantum mechanical effects in any direct way.

So I’m going to stick to these twin definitions – it’s the nanoscale if it’s too small to resolve in an ordinary light microscope, and if it’s bigger than your typical small molecule.

None but the brave deserve the (nano)fair

I’m in St Gallen, Switzerland, in the unfamiliar environment (for an academic) of a nanotechnology trade fair. The commercialisation arm of our polymer research activities in the University of Sheffield, the Polymer Centre, is one of the 14 UK companies and organisations that are exhibiting as part of the official UK government stall at Nanofair 2004.

It’s interesting to see who’s exhibiting. The majority of exhibitors are equipment manufacturers, which very much supports one piece of conventional wisdom about nanotechnology as a business: that the first people to make money from it will be the suppliers of the tools of the trade. The second biggest category is perhaps the countries and regions trying to promote themselves as desirable locations for businesses to relocate to. Companies that actually have nanotechnology products for consumer markets are very much in the minority, though there are certainly a few interesting ones here.

Alternative photovoltaics (dye-sensitised and/or polymer-based) are making a strong showing, helped by a lecture from Alan Heeger, largely about Konarka. This must be one of the major areas where incremental nanotechnology has the potential to make a disruptive change to the economy. A less predictable, but to me fascinating, stand was that of a Swiss plastics injection moulding company called Weidmann. Injection moulding is the familiar (and very cheap) way in which many plastic items, like the little plastic toys that come in cereal boxes, are made. Weidmann are demonstrating an injection moulded part in an ordinary commodity polymer with a controlled surface topography at the level of 5-10 nanometers. To me it is stunning that such a cheap and common processing technology can be adapted (certainly with some very clever engineering) to produce nanostructured parts in this way. Early applications will be parts with optical effects like holograms directly printed in, and, more immediately, microfluidic reactors for diagnostics and testing.

The UK has a big presence here, and our stand has some very interesting exhibitors on it. I’ll single out Nanomagnetics, which uses a naturally occurring protein to template the manufacture of magnetic nanoparticles with very precisely controlled sizes. These nanoparticles are then used either for high density data storage applications or for water purification, as removable forward osmosis agents. This is a great example of exploiting biological nanotechnology, very much in accord with the philosophy outlined in my book Soft Machines; I should declare an interest, in that I’ve just joined the company’s scientific advisory board.

The UK government is certainly working hard to promote the interests of its nascent nanotechnology industry. Our stall is full of well-dressed and suave diplomats and civil servants. However, one of the small business exhibitors was muttering a little that if only the government were willing to spend money directly supporting companies with no-strings contracts, as the US government is doing with companies like Nanosys, then maybe the UK’s prospects would be even brighter.

If biology is so smart, how come it never invented the mobile phone/iPod/Ford Fiesta?

Chris Phoenix, over on the CRN blog, in reply to a comment of mine, asked an interesting question to which I replied at such length that I feel moved to recycle my answer here. His question was, given that graphite is a very strong material, and given that graphite sheets of more than 200 carbon atoms have been synthesized with wet chemistry, why is it that life never discovered graphite? From this he questioned the degree to which biology could be claimed to have found optimum or near optimum solutions to the problems of engineering at the nanoscale. I answered his question (or at least commented on it) in three parts.

Firstly, I don’t think that biology has solved all the problems it faces optimally – it would be absurd to suggest this. But what I do believe is that the closer to the nanoscale one is, the more optimal the solutions are. This is obvious when one thinks about it; the problems of making nanoscale machines were the first problems biology had to solve, it has had the longest time to work on them, and at that point it was closest to starting from a clean slate. In evolving more complex structures (like the eye) biology has to co-opt solutions that were evolved to solve some other problem. I would argue that many of the local maxima that evolution gets trapped in are actually near optimal solutions of nanotechnology problems that have to be sub-optimally adapted for larger scale operation. As single-molecule biophysics progresses and reveals just how efficient many biological nanomachines are, this view, I think, becomes more compelling.

Secondly, and perhaps following on from this, the process of optimising materials choice is very rarely, either in biology or human engineering, simply a question of maximising a single property like strength. One has to consider a whole variety of different properties (strength, stiffness, fracture toughness), as well as external factors such as difficulty of processing and cost (in money for humans, in energy for biology), and achieve the best compromise set of properties for fitness for purpose. So the question you should ask is, in what circumstances would the property of high strength be so valuable for an organism, particularly a nanoscale organism, that all other factors would be overruled? I can’t actually think of many, as organisms, particularly small ones, generally need toughness, resilience and self-healing properties rather than outright strength. And the strong and tough materials they have evolved (e.g. the shells of diatoms, spider silk, tendon) actually have pretty good properties for their purposes.

Finally, don’t forget that strength isn’t really an intrinsic property of materials at all. Stiffness is determined by the strength of the bonds, but strength is determined by what defects are present. So you have to ask, not whether evolution could have developed a way of making graphite, but whether it could have developed a way of making macroscopic amounts of graphite free of defects. The latter is a tall order, as people hoping to commercialise nanotubes for structural applications are going to find out. In comparison, the linear polymers that biology uses when it needs high strength are actually much more forgiving, if you can work out how to get them aligned – it’s much easier to make a long polymer with no defects than it is to make a two- or three-dimensional structure with a similar degree of perfection.

Lord of the Rings

As light relief after the last rather dense post, here’s one of the sillier exchanges from Monday’s round-up of events at the British Association meeting:

Quentin Cooper (compere of the event)
– I noticed that one of the speakers described Drexler’s book “Engines of Creation” as the “Lord of the Rings” of nanotechnology, is that right?

Me
– No, Engines of Creation is “The Hobbit” of nanotechnology, it’s short, easy-to-read and everyone likes it. “Nanosystems” is “The Lord of the Rings”, it’s long, dense, half the world thinks it’s the best book ever written and the other half thinks it’s rubbish.

Henry Gee (Nature magazine)
– Are you sure it’s not the Silmarillion?

Did Smalley deliver a killer blow to Drexlerian MNT?

The most high profile opponent of Drexlerian nanotechnology (MNT) is certainly Richard Smalley; he’s a brilliant chemist who commands a great deal of attention because of his Nobel prize, and his polemics are certainly entertainingly written. He has a handy way with a soundbite, too, and his phrases “fat fingers” and “sticky fingers” have become a shorthand expression of the scientific case against MNT. On the other hand, as I discussed below in the context of the Betterhumans article, I don’t think that the now-famous exchange between Smalley and Drexler delivered the killer blow against MNT that sceptics were hoping for.

For my part, I am one of those sceptics; I’m convinced that the MNT project as laid out in Nanosystems will be very much more difficult than many of its supporters think, and that other approaches will be more fruitful. The argument for this is covered in my book Soft Machines. But, on the other hand, I’m not convinced that a central part of Smalley’s argument is actually correct. In fact, Smalley’s line of reasoning, if taken to its conclusion, would imply not only that MNT is impossible, but that conventional chemistry is impossible too.

The key concept is the idea of an energy hypersurface embedded in a many-dimensional hyperspace, the dimensions corresponding to the degrees of freedom of the participating atoms in the reaction. Smalley argues that this space is so vast that it would be impossible for a robot arm or arms to guide the reaction along the correct path from reactants to products. This seems plausible enough on first sight – until one pauses to ask, what in an ordinary chemical reaction guides the system through this complex space? The fact that ordinary chemistry works – one can put a collection of reactants in a flask, apply some heat, and remove the key products (hopefully this will be your desired product in a respectable yield, with maybe some unwanted products of side-reactions as well) – tells us that in many cases the topography of the hypersurface is actually rather simple. The initial state of the reaction corresponds to a deep free energy minimum, the product of each reaction corresponds to another, similarly deep minimum, and connecting these two wells is a valley; this leads over a saddle-point, like a mountain pass, that defines the transition state. A few side-valleys correspond to the side-reactions.

Given this simple topography, the system doesn’t need a guide to find its way through the landscape; it is strongly constrained to take the valley route over the mountain pass, with the probability of it taking an excursion to climb a nearby mountain being negligible. This insight is the fundamental justification of the basic theory of reaction kinetics that every undergraduate chemist learns. Elementary textbooks feature graphs with energy on one axis, and a “reaction coordinate” along the other; the graph shows a low energy starting point, a low energy finishing point, and an energy barrier in between. This plot encapsulates the implicit, and almost always correct, assumption that out of all the myriad of possible paths the system could take through configuration space, the only one that matters is the easy way, along the valley and over the pass.
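
To put a number on why the “excursion over a nearby mountain” is negligible: the probability of the system crossing a barrier of height ΔG scales with the Boltzmann factor exp(-ΔG/kT), so even modest differences in barrier height translate into enormous differences in rate. Here is a minimal sketch of that arithmetic; the barrier heights are invented purely for illustration.

```python
import math

# Relative probabilities of competing paths over a free-energy landscape,
# weighted by Boltzmann factors. Barrier heights are illustrative inventions.
kT = 2.5                     # thermal energy at room temperature, ~2.5 kJ/mol

barriers = {                 # free-energy barriers in kJ/mol (made up)
    "valley route over the pass (transition state)": 60.0,
    "side valley (side reaction)": 75.0,
    "excursion over a nearby mountain": 150.0,
}

weights = {path: math.exp(-dg / kT) for path, dg in barriers.items()}
total = sum(weights.values())

for path, weight in weights.items():
    print(f"{path}: relative probability {weight / total:.1e}")
```

With these made-up numbers the mountain excursion comes in at around one part in 10^16 relative to the valley route, which is the quantitative content of the valley-and-pass picture.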

So if in ordinary chemistry the system can navigate its own way through hyperspace, what’s different in the world of Drexlerian mechanochemistry? Constraining the system by having the reaction take place on a surface and spatially localising one of the reactants will simplify the structure of the hyperspace by reducing the number of degrees of freedom. This makes life easier, not harder – surfaces of any kind generally have a strong tendency to have a catalytic effect – but nonetheless, the same basic considerations apply. Given a sensible starting point and a sensible desired product (i.e. one defined by a free energy minimum), chemistry teaches us that it is quite reasonable to hope for a topographically straightforward path through the energy landscape. As Drexler says, if the pathway isn’t straightforward you need to choose different conditions or different targets. You don’t need an impossible number of fingers to guide the system through configuration space for the same reason that you don’t need fingers in conventional chemistry: the structure of configuration space itself guides the way the system searches it.

This is a technical and rather abstract argument. As always, the real test is experimental. There’s some powerful food for thought in the report on a Royal Society Discussion Meeting, “Organizing atoms: manipulation of matter on the sub-10 nm scale”, which was published in the June 15 issue of Philosophical Transactions. Perhaps the most impressive example of a chemical reaction induced by physically moving individual reactants into place with an STM is the synthesis of biphenyl from two iodobenzene molecules (Hla et al., PRL 85, 2777 (2000)). To use their concluding words: “In conclusion, we have demonstrated that by employing the STM tip as an engineering tool on the atomic scale all steps of a chemical reaction can be induced: Chemical reactants can be prepared, brought together mechanically, and finally welded together chemically.” Two caveats need to be added: firstly, the work was done at very low temperature (20 K), presumably so the molecules didn’t run around too much as a result of Brownian motion. Secondly, the reaction wasn’t induced simply by putting fragments together into physical proximity; the chemical state of the reactants had to be manipulated by the injection and withdrawal of electrons from the STM tip.

Nonetheless, I rather suspect that this is exactly the sort of reaction that one would say wasn’t possible on the basis of Smalley’s argument.

(Links in this post probably need subscriptions).

Nanotechnology at the British Association

The annual British Association meeting is the main science popularisation event in the UK, and not surprisingly nanotechnology got a fair bit of attention this year. The physics section ran a session on the subject yesterday morning. First up was Nigel Mason, who organised the physics part of the meeting this year and thus could give himself the best slot. He’s an atomic and molecular physicist who does scanning probe microscopy; his talk was a standard account of nanotechnology from the point of view of someone who’s got a scanning tunneling microscope and knows how to use it: from Feynman via the IBM logo and quantum corrals to some of his own stuff about imaging DNA. Next was Mark Welland, who runs the Nanotechnology Centre at Cambridge University. Once he’d calmed down after the first talk, which had upset him in all sorts of ways, not least by talking about Drexler in what he thought was an insufficiently critical way, he talked about his group’s work on silicon carbide nanowires, which if they do nothing else have produced some of the prettiest images to come out of current nanoscience. Then it was my turn. As Mark Welland said, making his excuses for leaving early, “I know what you’re going to talk about because I’ve read your book”.

Harry Kroto, Nobel Laureate for his co-discovery (with Richard Smalley) of buckminsterfullerene, was talking about nanotechnology in the chemistry section in the afternoon, but I didn’t get a chance to see it as I was roped into a rather tedious panel discussion about how the public perceives physicists. The final event for me was an appearance in a discussion event compered by the (excellent) BBC radio science journalist Quentin Cooper. This brought me the chance to share a platform with a poet, a paleontologist, and the government’s chief scientific advisor, Sir David King. We also got some free beer, though to Sir David’s horror this was bottles of (American) Budweiser rather than pints of bitter. So I got a final chance to make my nanotechnology pitch, though Quentin Cooper was rather more interested in trying to prise an unwise comment from the famously undiplomatic King. He happily confirmed that he still thought that global warming was a bigger threat than terrorism, he didn’t deny the suggestion that he’d received a rebuke from 10 Downing Street for saying this in the USA, where it’s language not thought suitable for a servant of the government of a loyal ally, and he was smilingly gnomic about who he wanted to win the US presidential election.

The BA is all about publicity, so it’s worth asking how much interest this attention to nanotechnology stirred up. For my part, I think my talk got a good reaction, I signed the first copy of my book for a stranger, I did an interview for Radio New Zealand, and got the approval and interest of one of the BBC’s best science journalists. And I now know who’s reviewing my book for Nature (Mark Welland). But I don’t think the subject really caught fire. Maybe a rather febrile summer of nanotechnology coverage has left media people starting to be a tiny bit bored with the word.

Not Enough (Yet)

I’ve just finished reading Enough, Bill McKibben’s jeremiad against genetic engineering, robotics and nanotechnology. The argument, as suggested in the title, is that we’ve done enough science, and we should stop developing nanotechnology and genetic engineering now, before we lose irrecoverable aspects of our humanity. It’s an important book, a compelling book in some ways, and I’m surprised by how much I agree with it. I accept a lot of McKibben’s arguments about what it is to be human, and like him I find the posthumanists’ creed, that we should happily trade in our humanity for some ill-defined post-human nirvana, very unattractive.

I part company with McKibben at the point where he dismisses the claim that we need better science and technology to improve our current human condition. It’s easy to find, as McKibben does, anecdotes about the way in which, say, high technology agriculture has made the lives of third-world farmers worse rather than better. But another excellent book from my summer reading list – Enriching the Earth, by Vaclav Smil – makes it clear how much humanity as a whole depends on intensive agriculture, and in particular on artificial nitrogen fertilizers. For privileged inhabitants of rich countries, like myself and Bill McKibben, a rich diet based on non-intensive farming is entirely possible and indeed very agreeable. But for the majority of the world’s city-dwelling population this simply isn’t an option. Smil’s book lays out the figures – non-intensive farming, without artificial fertilisers, could supply only 40% of the world’s current population at current average diets, a figure that would rise to 50% if everyone adopted a minimally nutritious but frugal diet.

This is just one example of the way in which we are currently existentially dependent on technology for the survival of the human race at current population levels. But the technology we depend on is not sustainable and has many well-known disadvantages – taking this example, artificial fertilisers are produced using a huge amount of fossil fuel based energy, with serious negative consequences like global warming, and the direct consequences of waste nitrogen fertiliser run-off on ecosystems are now well known. The world population is now starting to level off, and we do have it within our grasp to reach a future in which the world has a stable population with a decent standard of living, obtained in a sustainable way. But to get to this point the technology we have now is not enough. We’ll need clean energy, clean water, better medicine, ways of cleaning up the environment and keeping it unpolluted. Nanotechnology should play a big role in all these developments.

A grand day out

I’ve been to London today, for two meetings, both about nanotechnology but with rather contrasting flavours. The morning saw me at a large TV production company, which is planning a three part series on nanotechnology for a national broadcaster. In the afternoon I was at the Department of Trade and Industry, with a couple of social science colleagues, including Stephen Wood, my coauthor on the ESRC report The Social and Economic Challenges of Nanotechnology. We were meeting the civil servant in charge of the DTI nanotechnology agenda, together with Hugh Clare, who is the director of the Micro/Nanotechnology network that the DTI is trying to establish with its £90 million, to discuss how they would like to see the social science research agenda shaped.

This was an interesting glimpse into government thinking. While the Treasury lives by a rigorous creed of free markets and non-intervention, the DTI is doing its best to formulate and implement what’s very reminiscent of an old-fashioned industrial policy, using government-sponsored innovation to rescue the small remnants of the UK’s manufacturing industry, for which these mandarins showed rather a touching nostalgia. What worries me is the central problem of how you define nanotechnology. The DTI is very keen on network building, but these networks are self-selecting and not necessarily truly representative. If you put together a network, how do you know that these are the companies and organisations that are genuinely developing and using nanotechnology to make new products and businesses, rather than those that find nanotechnology a useful label for marketing or fund-raising purposes? Here’s where the kind of network analysis that social scientists are developing could be really helpful.

How about the public perception issue? This is something that clearly deeply bothers the DTI, and there was palpable relief at the Royal Society report; clearly they were very comfortable with the modest extensions of regulation proposed in the report, and they seemed pretty confident that the government would simply accept the report and implement it in full. They’re seriously worried about over-regulation driving not just manufacturing but research overseas, and they cite the example of the relocation of animal experimentation to Hungary. But again, I don’t sense much confidence about what to do about the public perception issue. Clearly no-one in government believes in the so-called “deficit model” of public engagement anymore. (This is the idea that if you simply explained everything clearly enough, the scales would fall from the public’s eyes and they would eagerly embrace whatever new technology you were offering.) Old fashioned views about risk analysis won’t wash either – you can produce as many risk tables as you like to demonstrate that crossing the road is quantitatively more dangerous than using a nuclear powered toaster to make your genetically modified toast, but if this conflicts with people’s deep intuitions they’ll trust the intuition.

I think it all boils down to visions, and this is where I connect with my morning meeting. A company making a TV programme for prime-time isn’t going to devote three slots to potential improvements in supply chain management, better impact toughness for engineering thermoplastics, and new avenues in textile treatment. It’s the big visions that are going to make popular TV, and at the moment it’s the environmentalists, on one hand, and the Drexlerites, on the other, who have those visions, deeply uncomfortable as those visions are to the sober people in government departments and the nanobusiness world. But people need those big narratives to make sense of and get comfortable with technological change, and if people don’t like the narratives that are on offer they’d better develop a compelling one of their own.