The Black Hole Evaporator Engine

In 1960 the American physicist Robert Bussard published a paper that some considered the solution to the problem of true interstellar travel. It was a proposal for a special type of engine that did not require carrying vast quantities of fusion fuel, but would instead utilise the natural hydrogen of the interstellar medium, dispersed throughout the space between the stars. It would do this using a large magnetic funnel to scoop up the fuel and then direct it through a fusion reactor. The paper was titled “Galactic Matter and Interstellar Spaceflight” (Astronautica Acta, 6, pp.170-194, 1960) and it received wide coverage and enthusiasm.

Interstellar Ramjet (Rick Sternbach)

However, in the years that followed it became clear that there were some fundamental problems with the idea of the interstellar ramjet that might make it unworkable. From a physics perspective, it was quickly realised that one of the problems with using ordinary hydrogen is that it has a very small fusion cross section. Indeed, this is why modern fusion laboratories attempt to ignite isotopes of hydrogen instead, such as deuterium and tritium. In addition, even if you could capture these interstellar protons, they would arrive with such substantial energy that they would first have to be moderated down to a lower energy before they could be passed through a fusion reactor.

Another issue identified was that in order to get started in the first place, some quantity of fuel would be required to get up to sufficiently high velocity, and so the vehicle mass would be far from small. Finally, calculations conducted by others suggested that as the starship moved through the interstellar medium, this material would act as a form of drag on the vessel, and so introduce inefficiency into the motion that was likely significant and even critical to the design.

Years later, an idea occurred to one author (Kelvin Long), based on speculative calculations being made at the time about what happens when high energy particles are collided together. It drew on two developments in physics occurring at the time, and it even led to a study undertaken at the International Space University in a post-graduate project titled “Project BAIR: The Black Hole Augmented Interstellar Rocket” by Andrew Alexander, co-advised by Kelvin Long.

The first was the construction of the Large Hadron Collider (LHC), the large particle accelerator operated by the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. The LHC achieved its first high energy collisions in 2010 with beams of 3.5 TeV, and was subsequently upgraded to a 6.5 TeV beam before shutting down for further upgrades in 2018.

Prior to these high energy collisions, speculation had been mounting in the literature of General Relativity and string theory that there may be extra spatial dimensions, with the implication that the true Planck length could be much larger (and the associated Planck mass much lower) than usually assumed, sitting at around the ~1 TeV scale. This led to speculation about what happens when two sufficiently energetic particles collide: they could collapse inside their combined Schwarzschild radius to produce a mini black hole. If such a microscopic black hole were created, then according to the best theories it would rapidly and completely decay via the Hawking radiation effect. This caused a frenzy of discussion in the media and even a court case attempting to prevent the LHC from being switched on.

Whilst some were concerned over the possibility of generating mini black holes on Earth, it also seemed possible that this physics effect might have an application to the interstellar ramjet, and in particular to the issue of the very small cross section of hydrogen and the difficulty of causing it to undergo fusion at high energy. The idea was to allow the high energy protons to be magnetically scooped up as designed for the interstellar ramjet, but rather than trying to moderate them down to lower energy or to capture them, simply to allow them to collide naturally with each other, provided a sufficient number density could be assembled. If the predictions of higher-dimensional physics were correct, then the collision of these protons would result in a collapse into a mini black hole and the immediate evaporation of various particles via Hawking radiation. Some of these particles would be neutral and so could not be directed magnetically, but some would carry charge, and if these could be channelled rearward they would represent an effective exhaust. Hence the mechanism was called a Black Hole Evaporator Engine.

Black Hole Evaporator Engine (graphic by Adrian Mann, concept invented by K. F. Long)

The idea was pursued a little, and it was even proposed that some of the particles could be injected into a large ring collider, much like CERN, so as to increase the probability of particle collisions. Subsequent calculations found that it was likely to be an inefficient propulsion mechanism, but the work done on this innovative project was not sufficient to rule out its plausibility.

As we seek to cross space in search of planets around other stars, it is clear that, just like the stars, we are likely going to have to use similar energy mechanisms to make this a reality. This includes fusion, which is the source of power at the heart of all stars and governs their structure and evolution throughout their main sequence lifetime. But the death of a heavy star, in its final state as a black hole, may also have lessons to teach us, in that a black hole may give us the drive power we need to explore the galaxy and beyond.

Shock Ignition ICF for Space Propulsion

The Sun is a giant fusion reactor, generating energy through the fusion of hydrogen at the centre of its core, the radiation from which then takes around 50,000 years to make its way through the opaque interior and reach the photosphere. Whilst a nuclear fission reactor will create lots of nasty radiation products which then have to be stored for tens of thousands of years, a nuclear fusion reactor on Earth would not have the same problems and is relatively clean, whilst also offering the potential to produce reliable energy for the national grid. If we can make it work in an Earth-based laboratory, then this also brings the possibility of applying that same technology to a spacecraft propulsion system.


The key requirement for obtaining ignition in an inertial fusion capsule is the spherical convergence of the fuel by hydrodynamic implosion to a state satisfying what is called the fusion triple product. This specifies that the product of the confinement time, the particle number density and the plasma temperature has to exceed a certain value. When this occurs the fuel will ignite and generate a self-sustaining reaction, and the goal is to produce more energy from the capsule than went into creating it, so called energy gain. This condition is also known as the Lawson criterion, named after John Lawson who first derived it in 1955 and then published it in a paper titled “Some Criteria for a Power Producing Thermonuclear Reactor” (Proceedings of the Physical Society, Section B, 70, 1, pp.6 - 10, 1957).
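
In its modern 'triple product' form, and for Deuterium-Tritium fuel, the criterion is often quoted as being of order

n T τ > ~3 × 10²¹ keV s m⁻³

where n is the particle number density, T the plasma temperature and τ the energy confinement time; the exact threshold depends on the fuel mixture and the temperature assumed.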

Conventional direct drive method of ‘hot spot’ ignition inertial confinement fusion

In conventional inertial confinement fusion, this state is achieved by impinging multiple laser beams onto the surface of the capsule, onto what is known as the ablator shell. The lasers ablate mass from the surface, causing a rocket effect and a transfer of momentum so that the spherical shell starts to move inwards. The mass ablation also fully ionises the surface, creating a corona layer of ions and free electrons; a plasma state. The goal is to compress the capsule spherically and with high symmetry, until a highly compressed state is achieved at the centre where the so-called ‘hot spot’ ignition occurs under direct drive. The problem with this approach is that some of the electrons generated at the surface may be energised by the laser electromagnetic wave and accelerated up to suprathermal energies. This means that they depart from a Maxwellian distribution and travel inwards into the fuel, ahead of the full compression. As they enter the fuel they deposit their energy there and heat it up, so that it wants to expand, and this introduces an inefficiency into the implosion.

To help mitigate some of these issues, at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California the team uses a ‘Hohlraum’, or radiation cavity, so that the implosion occurs via indirect drive. That is, the lasers (192 in total) do not impinge directly on the capsule surface, but instead impinge onto the inner surface of a gold cavity that surrounds the capsule, and this then generates an x-ray heat bath around the capsule to ensure uniform symmetry.

Indirect drive ‘hot spot’ method of inertial confinement fusion

There is however another way of achieving energy gain, which has received comparatively little attention in the literature but which, most importantly for space propulsion, holds the promise of high gain. And by high we are talking about an order of magnitude higher than can be achieved in ‘hot spot’ ignition. The method is known as ‘shock ignition’, and among the first to propose it were R. Betti et al in “Shock Ignition of Thermonuclear Fuel with High Areal Density” (Physical Review Letters, 98, 155001, 2007) and L. J. Perkins in his paper titled “Shock Ignition: A New Approach to High Gain Inertial Confinement Fusion on the National Ignition Facility” (Physical Review Letters, 103, 045004, 2009).

In shock ignition the primary drive pulse is initially used to slowly compress the fuel to a high density and a pressure of several hundred Mbar through spherical convergence amplification, but below the threshold required for ignition. A return shock wave reflects from the centre and begins to travel outwards; meanwhile a second, ignitor pulse is sent into the capsule, and the ignitor shock eventually collides with the outgoing return shock from the first pulse, sending a collision shock back inwards and thereby heating the central hot spot to ignition conditions. Shock ignition therefore depends upon the dynamics and interaction of three shock waves: the initial return shock, the ignitor shock and the collision shock. This is a simple scheme and it does not require the use of any short-pulse lasers, in the way that an alternative method called fast ignition would, for example. This helps to minimise laser-plasma instabilities.

One of the neat things about shock ignition is that the capsules have an unusually large ablator shell. This means that any suprathermal electrons generated in the corona will deposit their energy into that ablator shell and will not make it into the fuel, where they would otherwise cause expansion. The energy they deposit in the ablator shell therefore contributes towards the implosion, resulting in an amplification of the pressure pulse - and this is why a much higher gain is possible in principle.

For a normal NIF-type hot spot ignition capsule, the irradiation of the surface will also seed Rayleigh-Taylor (RT) instabilities, which grow in proportion to the capsule In-Flight Aspect Ratio (IFAR), a measure of the average shell radius relative to its thickness (~2 for shock ignition, c.f. ~5 for the conventional thin-walled designs used in other methods). For shock ignition the IFAR is kept low because the fuel remains on a low adiabat, associated with the low implosion velocity, which promotes stability during the acceleration phase. This means that the RT instabilities for shock ignition are much reduced compared to conventional ICF.

Like many inertial confinement fusion capsule designs, these are untested to the point of ignition and gain. But it gives hope for the future that we have so many different types of design to experiment with, to ensure we get the performance that we need for either an Earth-based reactor or a space-based propulsion system. It is likely that shock ignition designs will have a key role to play as we seek to optimise performance and mitigate losses whilst attempting to create a star in a reaction chamber on Earth or in space.

Extraterrestrial or Interdimensional Hypothesis

Recently, the nuclear physicist and writer Stanton Friedman passed away. He was a prolific author of books on the unidentified flying object phenomenon and gave hundreds of lectures. His belief was that alien visitations were real, and he was a proponent of the ‘nuts and bolts’ perspective: that alien technology is here today and governments are aware of this. This is known as the Extraterrestrial hypothesis, and it makes the claim that observations of craft in the sky, or claims of visitations and abductions, are best explained by accepting that a non-human intelligence has travelled here on an alien starship purely to visit the planet.

Many follow his viewpoint, and he leaves behind a trail of dedicated researchers who also subscribe to this perspective. The idea that aliens are visiting our planet from another planet around another star is attractive to many, and some want it to be true. This could be because they see it as a possible solution to the problems of our civilization and our inability to solve them ourselves, or it could just be because the idea is cool. That said, not all subscribe to this opinion.

A clip from ‘Earth versus the Flying Saucers’ movie, 1956 (Columbia Pictures)

The French/American researcher Jacques Vallee takes a radically different perspective, despite the fact that early in his career he had supported the extraterrestrial hypothesis. But his views began to change, and a factor in this was the absurdly large number of visitation cases being reported, which made the Earth and its people appear to be the equivalent of a local interstellar zoo for travelling alien tourists.

Instead, Vallee advocated for a different idea, known as the interdimensional hypothesis. This holds the view that visitations originate from other realities and dimensions that coexist alongside our own reality, perhaps in a multiverse of universes. In particular, the fact that our history is littered with ideas of mythological or supernatural creatures (goblins, elves, giants, dwarfs) might suggest that we are in fact witnessing a psychological phenomenon that has been with us for as long as human beings have existed.

Vallee raised several chief objections to the Extraterrestrial hypothesis, which were first laid out in his paper titled “Five Arguments Against the Extraterrestrial Origin of Unidentified Flying Objects” (Journal of Scientific Exploration, 1990). It is worth listing these arguments in full and making some counter-point comments as a devil’s advocate:

  1. That unexplained close encounters are far more numerous than required for any physical survey of the Earth. Although this does not take into account the vast number of planets around other stars that could be inhabited; if the galaxy is teeming with life, then the number of reports would correlate with a crowded galaxy. That said, it seems unlikely that many different species would be visiting Earth independently of each other, without interaction, communication and potential conflict among themselves over, say, visitation rights, of which we would then become aware.

  2. That the humanoid body structure of the claimed aliens is not likely to have originated on another planet and is not biologically adapted to space travel. It could be that a species downloads its consciousness into a biological form that is grown at the destination, in an attempt to increase the interaction and encounter potential by making efforts to appear like humans.

  3. That the reported behaviour in many of the abduction reports contradicts the hypothesis of genetic or scientific experimentation on humans by an advanced intelligence. This would be true, except for the situation where visiting aliens had a form or anatomy completely different to ours (as different as, say, a jellyfish is to us) and had no idea how we operated because we were so alien to them. However, if the aliens are an ‘advanced intelligence’ and have figured out star travel, then this implies they should easily figure out our anatomy without the need for crude experimentation techniques.

  4. That the extension of the phenomenon over all recorded human history suggests that it is not a contemporary phenomenon. Unless one considers that all past observations and reports, pre-dating the development of modern industrialised society, were just stories arising from a lack of education, informed opinion and the ability to rationally comprehend observations and record them reliably. But then how do we explain that the reports not only continued, but increased?

  5. That the apparent ability of unidentified flying objects to manipulate space and time might suggest radically different and richer alternatives. The reported flight capabilities of such craft certainly go beyond what the existing or projected aerospace capabilities of our modern technological societies can achieve. And there is sufficient data, from pilot observations and measurements at radar stations, to demonstrate that objects are being observed. The question is: are people seeing what they really think they are seeing, and is what they see external to themselves or an image generated internally by the brain?


Whatever one's views on the Extraterrestrial or Interdimensional hypothesis, it is clear from the vast number of reports made annually across the globe that some strange phenomenon is occurring. Perhaps none of these hypotheses is true, but instead we are witnessing a psychological phenomenon, an as yet unknown symptom of our brain tissue's exposure to certain technologies. We certainly live in a technological world, and electromagnetic fields move through the airspace almost as a constant background sea upon which our consciousness now swims. Who is to say that this isn't having an effect on our brains, causing delusions, hallucinations or merely manifesting our best fantasies or worst nightmares as a waking dream state?

It is clear however that something is going on, and it is wrong for governments to take the attitude they do, which tends to be dismissive of people's claims. If this is not visitation by aliens from other stars or from other dimensions, then we could be looking at a global phenomenon of a form of mania, in which case it should also be of interest to governments, who are charged with looking out for the well-being of their populations. One thing is for sure: the conversation is not likely to end any time soon and the reported sightings will surely continue into the future.

Roddenberry's Starships: Art vs Science

The television series Star Trek was created by Gene Roddenberry. Since the original series debuted in 1966 for three seasons on NBC in the United States, it has produced many spin-off series: The Original Series (1966 - 1969), The Next Generation (1987 - 1994), Deep Space Nine (1993 - 1999), Voyager (1995 - 2001), Enterprise (2001 - 2005) and Discovery (2017 - present). It has been an amazing franchise which has also produced 13 motion picture films.

A warp drive Starship from the Star Trek Universe (Paramount Studios)

A key element of the Star Trek universe is the starships themselves, based on some undefined warp drive technology that manipulates space and time in a way that allows them to travel across the galaxy and beyond, and still be home for tea. From a physics perspective this appears to break all of the known laws as we understand them. In addition, the engineering challenges of constructing such a vast machine are daunting to say the least. As the episodes of Star Trek rolled on year after year, there were efforts by the production team to introduce a science basis behind the technology. This led to the invention of an entirely new language, with mentions of technologies such as ‘dilithium crystals’, ‘tractor beams’, ‘replicators’, ‘universal translators’, ‘cloaking devices’, ‘deflector shields’ and a whole host of other ideas.

It is interesting to note that whilst some of these technologies are far from being part of the real world of science, others have actually matured into real devices, and the observation that science fiction inspires science as much as science inspires science fiction is an interesting one. Indeed, far from the sciences being seen as rigorous and the arts as creative, it has been said that science needs creativity to flourish and art needs rigour to have value.

So it was that in 1994 the Mexican physicist Miguel Alcubierre produced a paper titled “The Warp Drive: Hyper-Fast Travel within General Relativity” (Classical & Quantum Gravity, 11, L73 - L77, 1994), in which he demonstrated how the General Theory of Relativity allowed, in principle, for space to expand and contract in a way analogous to a warp drive. The paper was such a seminal publication for the field that it created an entirely new genre of theoretical physics research. This is even more amazing when we consider that at the time Alcubierre was a graduate student. Although many of the physics issues for a workable warp drive still look prohibitively difficult, the fact that we can learn so much about this theoretical construct so early on in the life of the idea gives some optimism that the research may lead to something interesting at least.
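
For the curious, the metric Alcubierre wrote down takes the following simple form (in units where the speed of light c = 1):

ds² = -dt² + (dx - vs f(rs) dt)² + dy² + dz²

where vs is the speed of the ‘warp bubble’ along x and f(rs) is a shape function equal to one inside the bubble and falling to zero far away from it, so that spacetime is flat both deep inside the bubble and far outside it.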

Visualisation of the shape function of the Alcubierre warp drive metric of General Relativity

However, there is an intriguing piece of history behind the design of the Starship from the Star Trek universe that many people may not be aware of: whilst in conventional engineering design shape tends to follow function, here, thanks to Gene Roddenberry, it was the other way around, and function followed shape.

The history of the creation of this starship is described in the book by Stephen E. Whitfield and Gene Roddenberry titled “The Making of Star Trek” (Ballantine Books, 1968). In this book the authors detail the design briefing specified by Roddenberry for the U.S.S. Enterprise:

We’re a hundred and fifty or maybe two hundred years from now. Out in deep space, on the equivalent of a cruiser-size spaceship. We don’t know what the motive power is, but I don’t want to see any trails of fire. No streaks of smoke, no jet intakes, rocket exhaust, or anything like that. We’re not going to Mars, or any of that sort of limited thing. It will be like a deep-space exploration vessel, operating throughout our galaxy. We’ll be going to stars and planets that nobody has named yet...I don’t care how you do it, but make it look like it’s got power.

Then, so it was that the set designers came up with the gorgeous concept that we see in the television show and movies today. In the subsequent discussions with the set designers, when they made comparisons to the existing space program, or to Buck Rogers or Flash Gordon, the response of Roddenberry was “This we will not do”. The same response was given when comparisons were made to concepts from companies like North American, Douglas and TRW. What Roddenberry seemed to be reaching for was an acknowledgement that this machine was in the far future, more advanced than even the most visionary scientists of the day were conceiving.

Roddenberry wanted something that was beyond the reach of existing paradigms. This is consistent with the second law of the science and science fiction writer Arthur C. Clarke, who said “The only way of discovering the limits of the possible is to venture a little way past them into the impossible”. It is interesting to note that Roddenberry had previously had extensive discussions with Clarke, and so was likely familiar with this law, given that it was published in Clarke’s book “Profiles of the Future” in 1962.

So it was that the team produced the beloved starship concept of Star Trek, driven by an engine called a warp drive for which nobody could describe how it really worked. That warp drive seems to have come out of the requirement not to have any smoke, flames or exhaust. From a scientific extrapolation perspective this made no sense at all. But from an artistic perspective it was pure brilliance, and perhaps not something science would ever have created on its own; science needs the creativity of the arts.

This demonstrates the value of interdisciplinary thinking and the risks of working only in specialised areas. It is clear that to progress technologically, science needs the arts. Would the idea of a warp drive ever have been realised if it had not been conceived from this artistic background? We will never know for sure, but as long as we practice both in unison, as a form of joyous dance, the novelty produced by our species knows no bounds.

Starship Endeavour 1.0

Starship Endeavour is part of the Project Icarus suite of starship design solutions, part of an effort to redesign the British Interplanetary Society's Project Daedalus flyby probe from the 1970s. It builds on earlier work for a single stage engine design known as Starship Resolution. However, the main problem with Resolution was its long boost duration of 15 years, which was considered a major risk to mission success, given the potential for issues like thrust structure fatigue and general system reliability.

To achieve a reduction in boost duration, Starship Endeavour would instead employ a quintic engine design, that is, one with five engine bells, similar to, say, the Saturn V moon rocket. A trade-study had first been conducted to see the effect of increasing the number of thrust chambers or engine nozzles, and it found that by moving to more engines (parallelising the thrust) a significant drop in the boost duration could be achieved, all for a constant cruise velocity of 4.84% of the speed of light.
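
The underlying scaling is straightforward: for a fixed propellant load Mp and a fixed per-engine mass flow rate, the burn time falls roughly as the reciprocal of the number of engine bells N,

t_boost ≈ Mp / (N × ṁ)

where ṁ is the mass flow rate through a single bell. This is the idealised skeleton scaling only, before the structural and radiation penalties of the extra bells are accounted for.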

Starship Endeavour 1.0 Five Engine Bell (Quintic) Design Concept (Adrian Mann)

For example, whilst a single engine might take 13.2 years (for the skeleton concept) at a thrust of 0.46 MN and a mass flow rate of 0.043 kg/s at a 150 Hz pulse frequency, having two engine bells would reduce the boost duration to 6.6 years with a thrust of 0.92 MN and a mass flow rate of 0.08 kg/s. Moving to 3, 4 or 5 engine bells would reduce the boost down to 4.4 years, 3.3 years or 2.3 years respectively. These figures were for a skeleton concept in the trade-study rather than the full design configuration, but parallelising the engines was clearly a route to burn time reduction.

The Endeavour starship concept would also employ Deuterium/Helium-3 fuel but would require only 22,000 tons of fuel for the acceleration phase and a further 4,000 tons for the deceleration phase (as opposed to the 50,000 tons of Project Daedalus). It would also utilise the Daedalus second stage capsule designs and would burn at a pulse frequency of 150 Hz, the same as the earlier Starship Resolution. It would exhibit an exhaust velocity of 9,210 km/s and accelerate at 0.13 m/s², reaching a cruise speed of 13,300 km/s or 4.44% of the speed of light.

Starship Endeavour with Cylindrical Propellant Tanks (Adrian Mann)

The engine would produce a total thrust of 1.99 MN and a jet power of 9.16 TW. The spacecraft would accelerate for 3.2 years to a distance of 3,747 AU. It would then cruise for a further 93.8 years over a distance of 263,161 AU, and then decelerate for 2.9 years over a distance of 11,614 AU. It would complete its mission by arriving at its destination target in a total time of 99.95 years, under the 100-year project requirement.
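
As a rough consistency check (not part of the original study), the quoted thrust and jet power follow directly from the stated 9,210 km/s exhaust velocity if one assumes that each of the five bells detonates the 0.288 gram Daedalus second stage capsules at 150 Hz. A short Python sketch under those assumptions:

    # Illustrative estimate only: assumed inputs are five engine bells, 0.288 g
    # capsules, 150 Hz per bell and a 9,210 km/s exhaust velocity.
    capsule_mass = 0.288e-3   # kg
    pulse_rate = 150.0        # detonations per second, per bell
    bells = 5
    v_exhaust = 9.21e6        # m/s

    mdot_total = bells * capsule_mass * pulse_rate   # ~0.216 kg/s
    thrust = mdot_total * v_exhaust                  # thrust = mdot * ve
    jet_power = 0.5 * mdot_total * v_exhaust ** 2    # jet power = 0.5 * mdot * ve^2

    print(f"mass flow = {mdot_total:.3f} kg/s")
    print(f"thrust    = {thrust / 1e6:.2f} MN")      # ~1.99 MN
    print(f"jet power = {jet_power / 1e12:.2f} TW")  # ~9.16 TW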

Although Starship Endeavour looked more credible, the addition of the extra engines also created a more complicated radiation environment. In particular, one of the reasons the original Daedalus team chose a Deuterium/Helium-3 fuel for their design was that the reaction is aneutronic, producing protons and helium-4 particles, both of which can be directed magnetically for thrust generation. However, it was pointed out in a paper by R. A. Hyde titled “A Laser Fusion Rocket for Interplanetary Propulsion” (IAF-83-396, 34th IAC, Budapest, October 1983) that Deuterium-Deuterium self-burn reactions within the fuel will lead to a large fraction of both low and high energy neutrons, which will reduce the useful power. Hyde's calculations showed that self-burn within the fuel will account for about 15% of the reactions, producing neutrons either directly or indirectly through Deuterium-Tritium reactions.

He also commented that any neutron capture by Helium-3 will produce Tritium, and most of this will burn, even in the outer reaches of the pellet. Further, Hyde stated that at temperatures relevant to DD or DHe3 burn (around 100 keV) there would be copious production of x-rays due to bremsstrahlung radiation. For Endeavour, the neutrons and x-rays presented a problem, not just for each engine bell, but also for the coupling between the engine bells now that there were five present in the design.

For some time the Endeavour design looked like it would fail, but two key design improvements saved it. The first was the adoption of a clever capsule design to ensure that the high energy neutrons and x-rays were adequately captured; this will be discussed in a later post. The second was the adoption of a radiation shield, based loosely on the Project Icarus Firefly design produced by Michel Lamontagne in earlier work titled “Heat Transfer in Fusion Starship Radiation Shielding Systems” (JBIS, 71, pp.450 - 457, 2018). This work is now being adopted into a redesign of the Endeavour starship, but with a 4-engine bell configuration, and this too will be discussed in a later post.

The design of starships is far from easy. It involves many complex physics issues, but also engineering issues, in order to demonstrate that something is practical. Physics and engineering are the first two hurdles on the way to a credible design. What comes after that is economics, and certainly the construction of spacecraft as ambitious as Daedalus, Resolution or Endeavour will not come cheaply. What is important is to be able to justify those costs by the primary mission benefits but also the secondary benefits to society. Ultimately, this has to be the test of all new technologies if we are to devote resources to their pursuit. The stars are no exception.

Starship Resolution

The Project Daedalus study ran from 1973 to 1978 and resulted in a systems-integrated study of a starship design that was unlike anything that had been undertaken previously. However, years after the study it became apparent that there were many potential problems with the Daedalus design which might result in a mission failure, many of which the team were themselves aware of and wrote about in a review paper by Alan Bond and Anthony Martin titled “Project Daedalus Reviewed” (JBIS, 39, pp.385 - 390, 1986).

Some of the problems identified include excessive fatigue on the thrust chamber from the high repetition rate of 250 detonations per second, the difficulty of mining Helium-3 fuel, the use of electron beams as the main energy driver for the detonations, and the production of x-ray radiation and high energy neutrons from self-burn reactions (e.g. Deuterium/Deuterium) inside the fuel. Others identified further issues not noted by the Daedalus team, such as the use of a Deuterium/Tritium trigger at the centre of the fuel pellets, which due to Tritium decay produces substantial heat.

There was also a desire to reduce the overall mass of the system, to ease the engine environmental conditions and to significantly simplify the design. This led to the realisation that it was worth running the numbers on using just a Daedalus second stage on its own, but with additional fuel so as to bring the spacecraft into full orbital insertion at the destination target by reverse engine thrust deceleration; Project Daedalus was a flyby probe only and did not decelerate.

This resulted in a design concept called Starship Resolution, which was presented by Kelvin F Long, Richard Osborne and Pat Galea in a report titled “Project Icarus: Starship Resolution Sub-Team Concept Design Report” (Project Icarus Internal Report, December 2013).

Project Icarus Starship Resolution (Adrian Mann)

To ease the environmental conditions, the spacecraft would detonate capsules at a rate of 150 Hz (instead of the 250 Hz of Daedalus) and it would utilise 20,700 tons of Deuterium/Helium-3 fuel for the acceleration, followed by 3,900 tons of Deuterium/Helium-3 for the de-burn. It would carry 12 propellant tanks for the boost and 4 tanks for the de-burn. It would use the second stage capsule designs of the Daedalus concept, which were 1 cm in diameter and 0.288 grams in mass. It would have a mass flow rate of 0.0432 kg/s for both the boost and the de-burn and it would exhibit an exhaust velocity of around 9,210 km/s.

In order to calculate the spacecraft performance in detail and with confidence, a Fortran program was constructed which first modelled the Daedalus design as a form of numerical validation of the model. This was then applied to the new design of Starship Resolution. The calculations showed the spacecraft would accelerate for 15.18 years, followed by a cruise phase of 81.47 years, then a de-burn phase of 2.86 years, bringing the spacecraft to its target destination in 99.52 years, less than the 100-year Project Icarus requirement.

Project Icarus Starship Resolution (Adrian Mann)

After the acceleration phase the spacecraft would achieve a cruise speed of 14,481 km/s or 4.83% of the speed of light. It would produce a thrust of 0.398 MN and a jet power of 1.832 TW. It would deliver its 150-ton payload (instead of the 450 tons of Daedalus) to its target, where it would deploy orbiters, atmospheric penetrators and ground landers onto the local planets and moons of that system. This would enable a far more in-depth study of astrophysics, planetary science, geology and astrobiology than could ever be achieved through a flyby probe alone.
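
These figures hang together under the standard ideal rocket relations. The sketch below is purely illustrative (it assumes the 0.288 gram capsules, the 150 Hz detonation rate and the 9,210 km/s exhaust velocity quoted above) and simply recovers the quoted mass flow, thrust, jet power and the boost-phase mass ratio implied by the rocket equation:

    import math

    # Illustrative check of the Starship Resolution figures, using values
    # quoted in the text (capsule mass, pulse rate, exhaust and cruise speeds).
    capsule_mass = 0.288e-3     # kg per capsule
    pulse_rate = 150.0          # detonations per second
    v_exhaust = 9.21e6          # m/s
    v_cruise = 1.4481e7         # m/s (4.83% of the speed of light)

    mdot = capsule_mass * pulse_rate             # ~0.0432 kg/s
    thrust = mdot * v_exhaust                    # ~0.398 MN
    jet_power = 0.5 * mdot * v_exhaust ** 2      # ~1.83 TW
    mass_ratio = math.exp(v_cruise / v_exhaust)  # ideal rocket equation, ~4.8

    print(f"mass flow  = {mdot:.4f} kg/s")
    print(f"thrust     = {thrust / 1e6:.3f} MN")
    print(f"jet power  = {jet_power / 1e12:.2f} TW")
    print(f"boost mass ratio (no losses) = {mass_ratio:.1f}")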

However, the main problem with the Starship Resolution design was its 15-year burn time. Considering the high risk of sub-system failures, this was deemed a substantial risk to the success of the mission, and it was desirable to reduce it significantly. This resulted in a new design called Starship Endeavour, which will be discussed in a later post.


A key fact to take away from the Starship Resolution design is that it demonstrated that it was possible to reduce the mass and complexity of the Project Daedalus design. It also demonstrated that full deceleration was possible, and therefore that an interstellar flyby probe was difficult to justify. In particular, if the cost of such a mission ends up being a significant fraction of global economic output, then there is an argument that one may as well just build a larger ground or space based telescope, or even a space interferometer. Making the probe fully decelerate into a local orbit would permit so much more science value, and even the biggest interferometer would find it difficult to compete with that.

Starship Leviathan

In 2012 a paper was published for a conceptual design study relating to Project Icarus. The concept was called the ‘Leviathan’, and the idea was to maximise the delta-v performance for both the acceleration and deceleration phases, but also to minimise technology maturation timescales by limiting the amount of any individual fuel required. The paper was written by K. F. Long, A. Crowl, A. Tziolas and R. Freeland and was titled “Project Icarus: Nuclear Fusion Space Propulsion & The Icarus Leviathan Concept” (Space Chronicle, 65, 1, 2012).

Project Icarus Leviathan Starship (Michel Lamontagne)

In particular, one of the issues with the historical 1970s Project Daedalus design was its 50,000 tons of Deuterium and Helium-3 fuel, of which Helium-3 constituted 30,000 tons. So instead, Leviathan would utilise only 15,000 tons of Helium-3, and would make up the difference by also utilising other fuels. It was thought that this might ease the system architecture requirements.

The acceleration would begin with the burn of 25,000 tons of Deuterium-Deuterium fuel, lasting 0.5 years and taking the spacecraft up to 2.04% of the speed of light; alternatively, this stage could use anti-proton induced catalysed fusion reactions on the Deuterium. The vehicle would then switch to a Deuterium/Helium-3 burn, with 10,000 tons of fuel, lasting 1.0 years and taking it up to a cruise speed of 4.25% of the speed of light. This would enable the mission to be completed in under a century (with the deceleration included), delivering a 150,000-ton payload.

The deceleration phase would involve a proton/Boron engine burn of 3,000 tons of fuel for a duration of 0.5 years, taking the cruise speed back down to 3.87% of the speed of light. The vehicle would then deploy a Medusa sail to bring the speed down to 1.77% of the speed of light; this is a large spinnaker-based sail design based on the ideas of J. C. Solem, such as in his paper “Nuclear Explosive Propulsion for Interplanetary Travel: Extension of the Medusa Concept for Higher Specific Impulse” (JBIS, 47, pp.229-238, 1994). The remaining deceleration would be achieved via a MagSail system, which pushes back against the outgoing charged particles and electromagnetic fields of the local stellar wind. This would bring the vehicle speed down to around 1% of the speed of light.

Eventually the spacecraft would arrive at its assumed Alpha Centauri A/B target. It would also deploy an on-board maser to eject several 1-ton Starwisp probes into the local stellar system for multi-planetary monitoring, to ensure full coverage of all three stars and their associated planets. This would be based on similar technology to that suggested by R. Forward in his paper “Roundtrip Interstellar Travel Using Laser-Pushed Lightsails” (J. Spacecraft & Rockets, 21, pp.187-195, 1984).

Just like the Daedalus starship, it is likely that design concepts like Leviathan do not represent the vehicle configurations we will eventually send to the stars. Indeed, one of the criticisms of the Leviathan concept is that its multi-modal propulsion system also allows for many failure modes. However, it is fun to consider these ideas and to think about how different technologies can be mated together. In the end, it is the integration of different technologies to high efficiency that will make or break any starship design.


Simulated Reality

Many have speculated that we may be living in a simulated reality. That is to say, that we are manifested constructs in a numerical program, operated and designed by beings far smarter than us. This is an interesting idea and for anyone that is familiar with computer programming it is not so far-fetched.

In particular, a physicist building a numerical model of a physical system will be faced with coding up equations that are marched forward in discrete time steps over a grid of cells that adequately model the system, with the physical properties flowing in and out of each cell calculated using, say, finite difference numerical schemes or other methods. You can literally sit there and watch the calculation scroll down the screen, in a manner not too dissimilar to the terminal screens visualised in the Matrix films. Perhaps the calculation only takes a few minutes, or perhaps it takes hours or days. But in the end, once the computation is completed, something has been modelled and simulated, and then as a scientist you will scrutinise the results.
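
As a toy illustration of what such a time-marching calculation looks like (a minimal sketch only, not any particular research code), here is an explicit finite difference scheme for one-dimensional heat diffusion on a row of grid cells, written in Python:

    # Minimal illustrative example: an explicit (forward-time, centred-space)
    # finite difference scheme for 1D heat diffusion, du/dt = D * d2u/dx2,
    # marched forward in discrete time steps over a row of grid cells.
    D = 1.0                       # diffusion coefficient
    nx, nsteps = 50, 500          # number of cells and time steps
    dx = 1.0 / nx
    dt = 0.4 * dx * dx / D        # below the stability limit of 0.5*dx^2/D

    u = [0.0] * nx
    u[nx // 2] = 1.0              # initial condition: a spike in the middle cell

    for step in range(nsteps):
        new_u = u[:]
        for i in range(1, nx - 1):
            new_u[i] = u[i] + D * dt / dx ** 2 * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new_u
        if step % 100 == 0:
            print(f"step {step:4d}   peak value {max(u):.4f}")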


An interesting development of our modern society is the gaming community. Games started at a very basic level, with limited processing power and interaction potential, and have since accelerated to the point where you can put on a visor and barely tell the difference between that simulated gaming world and the actual world where you physically exist.

An incredible example of what is possible is God of War, produced by Santa Monica Studio for the PlayStation 4. Based loosely on Greek and Norse mythology, you become a god doing battle with other gods. Along the journey, the player will encounter monsters of all sorts.

God of War

Another example is the game Galactic Civilizations, produced by Stardock for Microsoft Windows. In this game the player gets to explore planets and beyond using their own spacecraft, encountering other species. Another amazing example is Eve Online, produced by CCP Games. It has a scale and complexity that boggles the mind, as players compete and engage in large scale space warfare. The game takes the player right out into the Milky Way, 21,000 years into the future. We have come a long way since games like Monopoly and Dungeons and Dragons.

Eve Online

These sorts of games allow us to invent any species we wish, and then to enter those worlds and experience them as if they really existed. Only subtle errors in the simulation, a result of the currently limited technology with which we can program, visualise and trick our senses, remind us that this really is just a game.

But what about the future? If gaming is at this level today, where will it be in 10 years, 100 years or even 1,000 years from now? It seems quite possible that in the distant future we will have the technological ability to create all of the fantasies of our best hopes and dreams, but also of our worst nightmares. As we then continue to converge towards that technological-biological symbiosis, it may even be possible that we could get hurt.

One of the things that is intriguing about our own mythologies is that although we tell our children certain things exist, we know as adults that we are really just telling stories. That is, about wizards, dwarfs, elves, giants, dragons, fairies or whatever it is our minds can conjure. Yet with this increasing convergence, and ultimately what will be an inability to tell the difference between the real world and the simulated world, everything that is in our children’s stories will come into existence. What are the implications of this? Are we heading ourselves towards a cliff-edge unable to stop the magnetic pull of technology upon us? And as we fully immerse ourselves in this world what does it imply for free will? Indeed, if our actual existing reality is just a simulation, are we really players in the game or constructs generated by a meta-mind for the benefit of players far in excess of our intelligence?

The science and science fiction writer Arthur C Clarke said that “Any sufficiently advanced technology is indistinguishable from magic”. Our universe and our very existence appear to be a miracle of nature. But is it really just the physical embodiment of someone else’s technology? And for the purposes of our existence, would it really matter?

SETI: Drake Versus Fermi

The Fermi Paradox is a name given to a problem first proposed by the Italian physicist Enrico Fermi over lunch one day. Fermi had observed that any basic assessment of the number of stars and assumed planets with chemistry and organisms for life, and their associated timescales, suggested that if intelligent extra-terrestrials did exist in the galaxy, then statistically they are either here now or have been here in the past.

The Drake equation is the name given to an expression of multiplicative terms used to estimate the number of civilisations in the galaxy with which communication might be possible. It is named after the American astronomer Frank Drake. The terms include things like the rate of star formation in the galaxy, the average number of planets around stars that might support life, the fraction of those planets that develop life, the fraction that develop intelligent life, and the lifetime of any such civilisations that develop communications technology capable of transmitting into deep space. However, the final term, designated ‘L’, makes a rather grand claim that is worth exploring a little further.
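
For reference, the equation is usually written as

N = R* × fp × ne × fl × fi × fc × L

where N is the number of civilisations in the galaxy whose signals we might detect, R* is the rate of star formation, fp the fraction of stars with planets, ne the average number of potentially habitable planets per such star, fl the fraction of those that develop life, fi the fraction of those that develop intelligence, fc the fraction that develop detectable communications technology, and L the length of time over which such civilisations transmit.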

Project Cyclops

A key study that underpinned this equation was Project Cyclops in 1971, written by B. M. Oliver, J. Billingham and others with the title “Project Cyclops, A Design Study of a System for Detecting Extraterrestrial Intelligent Life” (NASA-CR-114445). Some of the conclusions of this seminal work included “It is vastly less expensive to look for and to send signals than to attempt contact by spaceship or by probes”, “The cost of a system capable of making an effective search, using the techniques we have considered, is on the order of 6 to 10 billion dollars, and this sum would be spent over a period of 10 to 15 years”, and “The search will almost certainly take years, perhaps decades and possibly centuries”. Reading this report, and other papers that came later, the SETI community did seem to converge on the conclusion that starship travel was not likely to be possible, and this does indeed seem to have been the view of Frank Drake.

It is interesting that the ‘L’ term in the Drake equation does seem to imply that any advanced extraterrestrial civilisation would only attempt to reach other civilisations in the Cosmos by transmitting radio (or optical) beacons. No consideration is given to the idea that they might instead choose to build a starship, travel across that distance, and interact on a physical level. This invites an interesting examination of the consistency of the thinking of both Drake and Fermi, and it would appear there are two possible interpretations.

Interpretation (1): Fermi's observation that they should be here (or have been here), yet we don't see any, is suggestive of the conclusion that any advanced ET would make contact by long-distance transmissions, and so is completely consistent with the Drake equation.

Interpretation (2): Fermi's observation that they should be here appears to be predicated on the idea that interstellar travel must be possible, and so on this basis it is not consistent with the Drake equation and is in fact in competition with it. This is because it implies that a further term needs to be added to the Drake equation to take account of interstellar diffusion by starships.

Well, it is up to each individual to come to their own conclusion. But it is worth noting that the physics basis for interstellar travel was first demonstrated theoretically by L. Shepherd in a Journal of the British Interplanetary Society publication titled ‘Interstellar Flight’ (JBIS, 11, pp.149-167, 1952). Then in the 1970s the Project Daedalus team went on to design, over five years, an actual starship concept that was credible in principle, as an existence proof. Their conclusion was that if they could conceive of such a machine at the outset of the space age, then in future centuries we could likely do much better, and so interstellar travel was possible. Alan Bond and Anthony Martin, members of the same Daedalus team, also went on to design full world ships in the 1980s, with papers titled “World Ships - Concept, Cause, Cost, Construction and Colonisation” (JBIS, 37, pp.243-253, June 1984) and “World Ships - An Assessment of the Engineering Feasibility” (JBIS, 37, pp.254-266, June 1984). This work demonstrated that not only was reconnaissance by interstellar probes likely possible, but so too was colonisation.

Many other studies have also been conducted to support this conclusion and it is curious that many in the SETI community seem to hold onto the ‘starships are impossible’ mantra, almost as a form of dogma. Only time will tell who was right.

Daedalus Starship (Don Dixon)

Interglacial Periods in History

During the history of Earth there have been five major ice ages, and we are currently in the Quaternary Ice Age, which began 2.59 million years ago. Within the ice ages there are sub-periods known as glacial and interglacial periods.

Recent measurements of the relative Oxygen isotope ratio in Antarctica and Greenland reveal the glacial and interglacial periods over the last few hundred thousand years. This is a measurement of the ratio of the abundance of Oxygen with atomic mass 18 to the abundance of Oxygen with atomic mass 16 present in ice core samples, ¹⁸O/¹⁶O, where ¹⁶O is the most abundant of the naturally occurring isotopes. Ocean water is mostly composed of H₂¹⁶O, in addition to smaller amounts of HD¹⁶O and H₂¹⁸O. The Oxygen isotope ratio is a measure of the degree to which precipitation, due to water vapour condensation during the transition from warm to cold air, removes H₂¹⁸O to leave behind water vapour richer in H₂¹⁶O. This distillation process causes precipitation to have a lower ¹⁸O/¹⁶O ratio as the temperature drops. It therefore provides a reliable record of ancient water temperature changes in glacial ice cores, where temperatures much cooler than present correspond to a period of glaciation and temperatures much warmer than today represent an interglacial period. Oxygen isotope ratios are therefore used by climate scientists as a proxy for temperature changes.

The Vienna Standard Mean Ocean Water (VSMOW) standard has a ratio of ¹⁸O/¹⁶O = 2005.2 × 10⁻⁶, so any changes measured in ice core samples are expressed relative to this number. The quantity being measured, δ¹⁸O, is a relative ratio and is calculated as follows, in units of parts per thousand (per mil, ‰).
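
That is, using the standard definition,

δ¹⁸O = [ (¹⁸O/¹⁶O)sample / (¹⁸O/¹⁶O)VSMOW - 1 ] × 1000 ‰

so that more negative values correspond to samples depleted in ¹⁸O relative to the standard.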

The change in the oxygen isotope ratio is then attributed to changes in temperature alone, assuming that the effects of salinity and ice volume are negligible. An increase of around 0.22‰ is then defined to be equivalent to a cooling of 1 ˚C, with the temperature given by

T = 16.5 - 4.3 δ + 0.14 δ²
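
As a simple worked illustration of that quoted relation (nothing more than the formula above evaluated for a few values of δ):

    # Worked illustration of the quoted paleotemperature relation
    # T = 16.5 - 4.3*delta + 0.14*delta**2  (T in degrees C, delta in per mil)
    def temperature_from_delta(delta):
        return 16.5 - 4.3 * delta + 0.14 * delta ** 2

    for delta in (-2.0, -1.0, 0.0, 1.0, 2.0):
        print(f"delta-18O = {delta:+.1f} per mil  ->  T = {temperature_from_delta(delta):.2f} C")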

There are differences in the value of δ between the different ocean regions where the moisture originally evaporated and the final place of precipitation. As a result the value has to be calibrated, and there are differences between, say, Greenland and Antarctica. This results in some differences in the proxy temperature data based on ice core analysis, and Greenland seems to stand out, for example indicating a more dramatic Younger Dryas period (11,600 – 12,900 years before present) than other data.

An analysis of this data shows that the climate has varied cyclically throughout its history and is a manifestation of natural climate change. In particular, some interesting lessons about the recent history of planet Earth emerge from the data. The data shows rapid oscillations of the climate temperature about the average temperature of today, indicative of glacial and interglacial periods. In particular, it shows that during the Holocene period, beginning approximately 11,700 years before present, the temperature varied by around 2 - 4 ˚C.

It is reasonable to assume that human civilisations under development will do better when the climate is kinder. This means that the warmer it is, the better civilisations will do, and the colder it is, the harder the struggle. In particular we can expect that under the conditions of a colder climate agricultural farming will suffer, and so there will be less food to go around, which will affect both life span and population growth. To support this it is worth noting that the current epoch, the last 10,000 years, has been the longest interglacial period for at least the last quarter of a million years, and it is reasonable to assume that this is one of the factors which has allowed human development from the emergence of the Neolithic period coming out of the last ice age.

Temperature proxy data from Greenland ice core samples of Oxygen isotope ratios.

The data also shows that there was a large global warming period known as the Eemian around 115,000 – 130,000 years ago. The average global temperatures were around 22 – 24 ˚C, compared to today where the average is around 14 ˚C. Forests grew as far north as North Cape in Norway, at 71˚ latitude inside the Arctic circle, and Oulu in Finland. For comparison, North Cape today is tundra, where the physical growth of plants is limited by the low temperatures and short growing seasons. Given that Homo sapiens may have been around since roughly 300,000 years ago, this seems like a major opportunity for the development of human society from a people of hunter-gatherers to one of agricultural developers and a civil society.

There have been other interglacial periods that resulted in global temperatures either equivalent to or above today's average, and the data shows temperature spikes at around 200,000 years, 220,000 years, 240,000 years, 330,000 years and 410,000 years ago. Each of these interglacial periods typically lasts at least 10,000 years.

Is it possible that these earlier periods in history allowed the opportunity for civilization to rise up and become sociologically and technologically advanced towards similar levels of today? The climate certainly seems to have allowed for it. The question is, did it happen?