Our experience of the Eclipse 2017

By Christine Ord

My husband David and I went to the US this August to experience our first total solar eclipse.

I am pleased to report that August 21st this year, the day of the total solar eclipse across the United States, dawned beautiful, hot and clear in Hendersonville, Tennessee, where David and I were booked into a hotel under the path of totality. In the days before the eclipse the forecasts were still predicting partly cloudy skies for the big day, so we were not certain whether we would be lucky and have a good view. The news reports were also advising that the roads were expected to be very busy as people got into their preferred locations for the eclipse. Based on these forecasts, we decided to stay close to the hotel and not join one of the big events being advertised in nearby towns.

There was a lake and small park area just across the road from the hotel, from where we decided to watch the eclipse under a covered picnic area. Several people had set up telescopes and cameras with filters near the hotel. As the time approached for first contact (the moon just beginning to cover the sun), a number of office workers came to the picnic area with lunch and drinks. They turned out to be the local council workers, plus the mayor, who had come out to watch the eclipse. They had such appropriate delicacies as ‘Moon Pie’, ‘Orbit’ gum, ‘Milky Way’ bars and ‘Sun Gold’ drinks (as well as enormous sandwiches which didn’t have an event-related name). We were invited to join them.

By the time totality drew near, we had been watching the moon’s progress across the sun through our eclipse glasses for about an hour and a half. The weather held and we had a great clear view of the event for the 2 minutes 40 seconds of totality. Here are a few of our observations:

  • It was surprising how much light the sun produced even when the majority of its surface was covered by the moon. If you didn’t have the eclipse glasses you wouldn’t know there was anything going on until the sun was completely covered.
  • As I was wearing the glasses until the sun’s disc was covered, I didn’t see the first diamond ring effect. You really need to be watching with the naked eye at the last moment to catch it, as the glasses cut out too much light.
  • We didn’t see the Baily’s beads effect, although one of the other observers, who had seen several eclipses, said that they are not always in evidence as they can be affected by the relative positions of the sun, moon and earth.
  • During the 2 minutes and 40 seconds of totality we could clearly see, with the naked eye, the white corona, which seemed to me to be wider around the equatorial plane of the sun, and the red chromosphere.
  • The temperature dropped considerably during totality from the scorching heat of earlier and it was pleasant to stand out of the shade. Estimates given later on TV for the drop in temperature were between 8 and 10 degrees.
  • The sky didn’t go dark during totality. It was more a twilight effect, that time after sunset but before it really goes dark. It was however dark enough to see Venus and some of the bright stars, which was great.
  •  There weren’t many animals in the area so we couldn’t judge the effect on them very well. However, just as the sun was covered, the cicadas in the nearby trees started ‘singing’ and continued until totality was over and the sun was hot again. Some geese who were in the pond nearby got out of the water, but we don’t know whether they would have done that anyway.

The time of totality passed quickly and we clearly saw the diamond ring as the moon continued its journey and the sun gradually came back into view. It didn’t take long to get back to daylight and high temperatures, even though the sun was still partially covered for the next hour and a half.

We both thoroughly enjoyed the experience and were not disappointed. As with most observations of astronomical events, you are left wanting more. Perhaps next time a longer totality, a glimpse of Baily’s beads, a pair of binoculars to see some coronal detail…

As we didn’t have a special filter for David’s camera, he could only take pictures of the Sun once it was totally covered. He used a Canon EOS 80D camera.  We have included a few of his photos of the event below.  Here are links to some of the more unusual images that other people took:

This one was taken from a plane as the shadow of the moon fell on the ocean before making landfall.

https://apod.nasa.gov/apod/ap170901.html

The spaceweather website has a gallery of images taken around the world during the eclipse

http://spaceweathergallery.com/eclipse_gallery.html

Picture taken from a weather balloon showing the shadow of the moon across the earth.

http://spaceweather.com/archive.php?view=1&day=21&month=08&year=2017

This is NASA’s best collection, which includes some images of the ISS passing in front of the Sun during the eclipse and some taken from the ISS itself.

https://petapixel.com/2017/08/22/nasas-best-photos-great-american-eclipse/

David’s Photos

    

Diamond ring effect as moon begins to move            Fellow observers

 

Total eclipse


An amateur detecting exoplanets

By Mark Trapnell

Introduction

The detection of exoplanets – that is, planets orbiting stars other than our Sun – has rarely been out of the news since the first successful detection back in 1992.  A few months ago, headlines were full of items about the discovery of an exoplanet orbiting Proxima Centauri, possibly within that star’s “habitable zone” and so with consequences for life on a planet somewhere “near” to us.

I was fortunate to have studied a part-time course in Astronomy at University College London (UCL) over the last two years.  One of the course’s modules was on exoplanets and included detailed descriptions of the different methods by which exoplanets are actually detected by astronomers.  With one exception, those methods all require sophisticated systems which are way beyond the reach of even the most enthusiastic and well-heeled amateur.   The exception is detection through the transit method.

The transit method measures the apparent brightness (or flux) of a star over a period of time, looking out for changes in that brightness.  Regular small changes of a particular type and short duration may indicate the presence of a planet passing across the face of the star.  Periodic measurement of the star’s brightness should result in a light-curve which shows a pronounced dip during the time of the planet’s transit.

transit-diagram

(Source: AstronomyOnline.Org)
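
The size of the dip follows from simple geometry: the planet blocks a fraction of the star’s disc equal to the ratio of their areas. A quick sketch (the solar, Jovian and terrestrial radii are standard values; this illustration is my own, not from the diagram’s source):

```python
def transit_depth(r_planet_km, r_star_km):
    """Fractional drop in stellar flux during a central transit:
    the planet hides an area pi*Rp^2 of the star's pi*Rs^2 disc."""
    return (r_planet_km / r_star_km) ** 2

R_SUN, R_JUPITER, R_EARTH = 696_000, 71_492, 6_371  # radii in km

# A Jupiter-sized planet dims a Sun-like star by about 1%...
print(f"{transit_depth(R_JUPITER, R_SUN):.4%}")
# ...while an Earth-sized planet causes a dip of under 0.01%.
print(f"{transit_depth(R_EARTH, R_SUN):.4%}")
```

This is why large, close-in planets were detected first: a 1% dip is within reach of modest equipment, while an Earth analogue needs space-based precision.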

More details on all this can be found easily with a quick Google search.  NASA has a good website which includes a section on methods of exoplanet detection at https://exoplanets.nasa.gov.

My UCL course suggested to me that amateurs should be capable of detecting exoplanets using the transit method and I decided to have a go to see how easy or otherwise it might prove to do so.

Finding a previously undetected exoplanet is pretty much beyond an amateur.  Even if we assume (as now seems likely) that most stars have orbiting planets, the transit method relies on the planet’s orbit passing at some point between us and the star.  Because of all the possible orbits an exoplanet could have, the likelihood of that happening is about 1 in 100 for a large planet and about 1 in 200 for an Earth-sized planet.  So I am quite happy to try to detect planets that are already known to orbit stars and to be detectable by the transit method!
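
Those odds come from orbital geometry: for a randomly oriented circular orbit, the chance that the planet crosses the star’s disc as seen from Earth is roughly the stellar radius divided by the orbital distance. A sketch (note this simple form depends on orbital distance rather than planet size; the 1-in-100 and 1-in-200 figures quoted above presumably fold in the typical orbits found for each class of planet):

```python
AU_KM = 149_597_870   # one astronomical unit in km
R_SUN_KM = 696_000

def transit_probability(r_star_km, orbit_km):
    """Chance that a randomly oriented circular orbit carries the
    planet across the stellar disc: approximately R_star / a."""
    return r_star_km / orbit_km

# A hot Jupiter at 0.05 AU transits for roughly 1 observer in 11...
print(f"{transit_probability(R_SUN_KM, 0.05 * AU_KM):.3f}")
# ...but an Earth analogue at 1 AU only for about 1 in 215.
print(f"{transit_probability(R_SUN_KM, 1.0 * AU_KM):.5f}")
```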

My online research came up with a short book written by Dennis Conti which is aimed at amateurs and explains the mechanics of exoplanet detection.  It is free for download at http://astrodennis.com.  I would recommend it to anyone interested.  I think that anyone who is broadly competent at astrophotography should also have the ability to detect exoplanets.

Equipment

The main question for me was what equipment was likely to be necessary for a successful detection.  Some research suggested that a few amateurs have successfully detected exoplanets, but they had almost invariably used large Schmidt-Cassegrain telescopes of upwards of 12 inches diameter, with 16 inches being normal.  I prefer to use refractors.  The largest refractor I own is a 6 inch f7 apochromat.  I found little to suggest that exoplanets could be detected with such a small telescope.  However, I decided to give it a go.

Here is what I think is needed:

  • A telescope. Of course.  My experiences suggest that a 6 inch refractor will do just fine, though I can see that a larger aperture scope might have some advantages, particularly in choice of target.  However, for the amateur, larger aperture normally means greater weight and longer focal length, both of which challenge other parts of the system.  Actually, I think a 5 inch refractor would probably have been fine too.
  • A mount. In my case it was a German equatorial mount – a Paramount ME.  I think it has to be (within reason) a good quality mount which is robustly set up and well calibrated with excellent polar alignment, tracking and pointing accuracy.  Shortcomings with any of these will make the process much more difficult.
  • A camera. I think pretty much any CCD (not video) astro camera will do.  I admit I have no idea whether a DSLR would work. I have my doubts.  I also don’t know whether a one shot colour camera would work.  Again, I have my doubts.  I used an elderly SBIG ST10XME which has a KAF3200 chip in it.   Certainly, a large chip isn’t needed and in many ways is a disadvantage as it increases file sizes unnecessarily.  The longer download times of older cameras are also a disadvantage as they restrict the amount of data that can be gathered over a finite exoplanet transit time.  I stuck with the ST10 because I think it has the most linear CCD chip of my cameras.  The key to the camera is that it has to be operated within its linear range (i.e. doubling the amount of photons hitting the detector doubles its output signal rather than increasing it by some other random amount).  More on this below.
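
A practical way to find where a camera stops being linear is to image a constant light source at a series of increasing exposure times and see where the measured signal departs from a straight line through the shortest exposures. A sketch with invented numbers (a real test would use mean counts from flat frames):

```python
def check_linearity(exposures, mean_adu, tolerance=0.02):
    """Estimate the response slope from the two shortest exposures
    (assumed safely linear), then report each frame's fractional
    departure from the extrapolated straight line.  Frames deviating
    by more than `tolerance` mark the onset of non-linearity."""
    slope = (mean_adu[0] / exposures[0] + mean_adu[1] / exposures[1]) / 2
    residuals = [(s - slope * t) / (slope * t)
                 for t, s in zip(exposures, mean_adu)]
    flagged = [i for i, r in enumerate(residuals) if abs(r) > tolerance]
    return residuals, flagged

# Synthetic data: the response rolls off in the final, longest frame.
t = [1, 2, 4, 8, 16]                     # exposure times in seconds
adu = [1000, 2000, 4000, 8000, 14500]    # measured mean counts
residuals, flagged = check_linearity(t, adu)
print(flagged)  # [4]: the 16 s frame has left the linear range
```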

I am very lucky to have a permanent small observatory near Benidoleig.  Based on my experiences, I don’t think this is necessary, but it certainly helps.

Software

Finding the target star, controlling the mount and acquiring images all need software.  For this I used Software Bisque’s The SkyX Professional.  Other programmes will work equally well, I am sure.

What I found so helpful as to be close to essential (for me at least) was the additional use of a plate-solving programme.   For example, if an image I have taken of a star field is loaded into The SkyX, the programme will compare the picture to its star database and when it finds a good match, will place and show that image at the right location in its planetarium programme.  Other programmes do the same thing using online data resources.

Plate-solving of this kind is useful for several reasons: (i) finding the right star in the first place.  The target stars are not the stars we all know and love and can point to in the sky.  They are often unnamed or feature in an obscure catalogue and so can be quite hard to find in practice; (ii) pointing the telescope at the right star, once it has been identified and shown in a planetarium programme; and (iii) subsequently checking that data has indeed been acquired from the right star.

There are several databases of exoplanet hosting stars and the one I used (http://var2.astro.cz/ETD/) provides a 15 arc-minute square Deep Sky Survey (DSS) image of the target star and its immediate surrounds.  I used The SkyX to plate-solve the DSS image so as to show me where in the sky the star is located.  In fact The SkyX also then allows accurate movement of the telescope to the exact location of the DSS image.  I was then also able to take a test exposure, plate-solve the resulting image and check that it coincided with the DSS image.  These kinds of advances in relatively mainstream software now enable amateur astronomers to achieve far more than would have been possible only a few years ago.

Once the data has been acquired, the resulting images need to be calibrated and aligned.  There are a number of programmes that will do this.  I used CCDStack.  See further later.
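
The arithmetic behind calibration is straightforward whichever programme does it: subtract a master dark, then divide by a normalised master flat. A toy sketch in plain Python, with small lists standing in for FITS images (CCDStack or similar does the real work on full frames):

```python
def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def master_frame(frames):
    """Median-combine a stack of frames (2-D lists of pixel values),
    as done to build master darks and flats from ~30 exposures."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median([f[r][c] for f in frames]) for c in range(cols)]
            for r in range(rows)]

def calibrate(raw, master_dark, master_flat):
    """Dark-subtract, then divide by the flat normalised to its mean."""
    rows, cols = len(raw), len(raw[0])
    flat_mean = sum(map(sum, master_flat)) / (rows * cols)
    return [[(raw[r][c] - master_dark[r][c]) /
             (master_flat[r][c] / flat_mean) for c in range(cols)]
            for r in range(rows)]

# 2x2 example: uniform dark of 10 counts, perfectly flat field.
cal = calibrate([[110, 120], [130, 140]],
                [[10, 10], [10, 10]],
                [[1, 1], [1, 1]])
print(cal)  # [[100.0, 110.0], [120.0, 130.0]]
```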

Finally, the target star’s brightness needs to be measured and the results plotted in a graph.  Again, I am sure there are any number of available and suitable programmes.  I had already used AstroImageJ during my studies at UCL.  It is a popular and powerful photometry programme and is available as freeware at http://www.astro.louisville.edu/software/astroimagej/.  It isn’t a particularly easy programme to use.  I think it is designed primarily for professional scientists.  However, by following the step by step instructions, and after trial and plenty of error over quite a few hours as I learned how the programme works, I eventually produced a recognisable light curve.

Gathering Data

My first attempt wasn’t too successful.  I was in a hurry having arrived at my observatory only a couple of hours before the predicted exoplanet transit.  As a result I made a mistake and set an exposure time for the camera that was too long and resulted in some of the camera’s pixels containing the target star image being saturated, something I didn’t discover until I examined the image data the next morning.  The moment they are saturated, pixels are no longer linear and so do not react proportionately to an increase or decrease in signal.  Indeed, I suspect they stop being linear some time before they saturate.

The following day I tried again on a different (and dimmer) target star.  This time I was much more careful with exposure times and worried instead whether I had gathered sufficient signal to overcome background noise.  As it turned out I had, though I suspect if I had increased exposure time slightly I might have had a cleaner light curve.

Here, broadly, are the steps I took:

  • Identify a suitable target star. The exoplanet database I mentioned above is great.  It allows the input of the user’s longitude and latitude and will then show a list of stars with predicted exoplanet transits for any given night together with data about sky coordinates, the predicted beginning and end of the transit and the likely fluctuation in measured brightness of the star during the transit.  The key is to select a target star that shows a significant change in brightness.  The star I chose (Wasp-52, named after the WASP programme, the Wide Angle Search for Planets, which first detected the exoplanet orbiting it) would apparently show in excess of a 2% change when its exoplanet (Wasp-52b) transits.  For some reason, the naming convention is that the first detected exoplanet associated with a particular star is given a “b” identifier, not an “a”!
  • Check the position of the target star in the sky before and after the transit. Ideally it should be as close to the zenith as possible so as to mitigate atmospheric disturbance to the images.  One disadvantage of a GEM is that the telescope and imaging equipment rotate around the mount’s polar axis, which is aimed at the celestial pole and inclined at the angle of the astronomer’s latitude.  They rotate continuously while tracking a star, and soon after passing the meridian the telescope may hit the ground or the pier that the mount sits on.  To avoid this, GEMs have to reverse themselves around the meridian and perform a so-called “meridian flip”.  I assumed that ideally the target star should not cross the meridian during the transit measurement period, so as to avoid the need for a meridian flip, which would involve reacquiring and centering the target star after the flip as well as causing other practical complications, at best resulting in an interruption of at least several minutes in the data-gathering run.  Wasp-52 crossed the meridian about 45 minutes before the exoplanet transit was due to start.  The data run should ideally start an hour before transit and end an hour after transit so as to give a proper light curve.  Meridian crossing meant that I would not be able to start until about 45 minutes before the predicted transit, which turned out not to be a problem.
  • Focus the camera. Focus isn’t highly critical.  Indeed, some techniques utilise deliberate defocusing to assist in smoothing out scintillation effects in the star image.  This seems to be particularly useful if the target star only covers a few pixels on the detector.  In my case, the star covered at least 10 pixels, so I didn’t bother defocusing and focused normally.  It is worth noting that the data gathering run lasts several hours – nearly four hours in my case.  I can see that a substantial temperature swing might cause focus problems, but I was lucky and the temperature dropped by only one degree during my data run.
  • Plate-solve the DSS Wasp-52 image and move the telescope to the indicated position on the planetarium programme (in my case, The SkyX).
  • Start auto-guiding. Auto-guiding takes images of a guide star every few seconds.  The software detects any movement of the guide star on the CCD detector and communicates correction instructions to the mount.  It is important to keep the star in pretty much the same position on the camera’s CCD chip during the entire transit run.  This is because otherwise variations in the brightness of the image field could produce false results.  This can be addressed to an extent by dark and flat-field subtraction, but research sources were all clear that it is better to start by eliminating movement on the chip as much as possible.  I guided at 5 second intervals and achieved accuracy of  about +/- 0.2 pixels through the data run.
  • Start taking images. I set my exposure time at 25 seconds for each frame.  The data run would last about 3 hours 15 minutes.  So allowing for download times, that resulted in  about 330 images!  Here is one of my images, indicating the position of Wasp-52.  Note how the star in the top right has saturated pixels, which give rise to “blooming” in this camera.  This is the problem I had the night before with my target star, but in this case it had no effect on Wasp-52 which was not saturated.

picture-of-target-star

  • The next morning (the data run finished at about 3 am), calibrate all the images with darks and flats taken the previous evening. A dark is an image taken at the same temperature (CCDs are typically cooled to reduce noise, mine was running at -10 degrees C) and same exposure length as the main image, but with no light falling on the detector.  It results in an image of the detector’s inherent noise which can then be subtracted from the actual image.  Flats are images taken of an even (flat) light source (for example an illuminated white screen or early evening sky) which model the light variations inside the telescope and imaging equipment.  The flat is then divided into the actual images to compensate for those variations.  The sources are adamant that dark and flat calibrations need to be done very carefully, so I combined 30 separate darks and 30 separate flats to make respective masters and then used those masters for calibration of the 330 images in CCDStack.  My laptop couldn’t cope with calibrating all 330 images at once – I had to batch process 50 images at a time.  Even so, it’s not a huge task with modern programmes and the whole process probably took me no more than about three or four hours.  If the camera moved between exposures, it would also be necessary to register (align) all the images so that the next stage, photometry, would work properly.  Given my guiding results, I decided not to bother with registration as the 330 images were close to perfectly aligned already.
  • Load the calibrated images into AstroImageJ and press “Go”! Well it isn’t quite that simple, but once the programme is properly configured with the coordinates of the target star, it is pretty much like that.  The main other input is first to load one of the images into the programme and identify the target star as well as several other comparator stars in the same image by placing a pre-defined measurement “annulus” over each star.  The annulus measures the star’s brightness as well as that of the sky background to determine the true brightness of the star itself.

picture-of-calibration

The programme automatically assumes that the first star chosen (T1) is the target star and that the others (C2-6) are comparator stars.  The idea is that the brightness of the target star (Wasp-52) is measured for each frame, as is that of each comparator star, which hopefully doesn’t have an orbiting planet affecting its brightness and isn’t a variable star, an eclipsing binary or anything else that might cause signal fluctuations.  That way, if the target and the comparator both vary at the same time in brightness by proportionate amounts, that variation can be ignored as resulting from external causes such as clouds passing across the field.  If in any given image the target varies but the comparator does not, the programme registers the variation as it must have a source inherent to the target star.  The literature does discuss the choice of comparator stars and checking beforehand that they aren’t variables or binaries.  I just used a pin and assumed that if one of my five comparators wasn’t good, that would show in its data compared to the others and I could eliminate it.  Not very scientific, but it seemed to work.
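
The heart of that differential photometry can be sketched in a few lines (a simplified illustration with invented counts; AstroImageJ also handles sky subtraction, the annulus geometry and error estimates):

```python
def differential_flux(target, comparators):
    """target: summed counts of the target star in each frame.
    comparators: one list of per-frame counts per comparison star.
    Returns the target's flux relative to the comparator ensemble,
    normalised to a mean of 1, so a transit appears as a dip below 1
    while shared effects such as passing cloud cancel out."""
    ensemble = [sum(frame) for frame in zip(*comparators)]
    ratio = [t / e for t, e in zip(target, ensemble)]
    mean_ratio = sum(ratio) / len(ratio)
    return [r / mean_ratio for r in ratio]

# Frame 2 is dimmed by cloud (everything drops together, so it
# cancels); only frame 3 shows a genuine dip in the target alone.
target = [1000, 1000, 800, 980, 1000]
comp_a = [2000, 2000, 1600, 2000, 2000]
comp_b = [1500, 1500, 1200, 1500, 1500]
flux = differential_flux(target, [comp_a, comp_b])
print(min(range(5), key=lambda i: flux[i]))  # 3
```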

Once “Go” has been pressed, the programme analyses each image identifying variations and starts creating the light curve, dot by dot. It is a great experience to see the dots start appearing, and even better when they begin to head downwards in a curve indicating a successful exoplanet detection and then later head back upwards as the transit ends.  Here is the curve I generated from my 330 images:

picture-of-result-graph

The top (horizontal) row of blue dots clearly shows the Wasp-52b exoplanet light curve.  The two horizontal rows of magenta and orange dots underneath are the plots of two of the comparator stars.   I actually used five comparators, but removed the other three from the measurement plot so it isn’t too cluttered.  It can be seen from the vertical axis that there was an almost 3% dip in brightness of the target star (from 1.01 down to 0.98 on the relative flux scale) during the transit and that the transit lasted nearly two hours.  All exactly as predicted!  The radius of the host star is known from its spectral type and so the amount of the dip in the light curve allows calculation of the radius of the planet, which turns out to be about 1.3 x the radius of Jupiter.  The period of the orbit is only a few days, so the orbit of Wasp-52b is clearly close to its star, making it a “hot Jupiter”, the easiest type of exoplanet to detect.
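
That radius calculation is a one-liner: the depth of the dip is the square of the ratio of planet radius to star radius, so the planet’s radius is the star’s radius times the square root of the depth. A sketch (Wasp-52’s radius of roughly 0.79 solar radii is a published figure assumed here):

```python
import math

R_SUN_KM, R_JUPITER_KM = 696_000, 71_492

def planet_radius_km(depth, r_star_km):
    """Invert the transit-depth relation depth = (Rp / Rs)**2."""
    return r_star_km * math.sqrt(depth)

# An ~3% dip on a 0.79 R_sun star gives a planet of ~1.3 Jupiter radii.
rp = planet_radius_km(0.03, 0.79 * R_SUN_KM)
print(f"{rp / R_JUPITER_KM:.2f} R_Jup")  # 1.33 R_Jup
```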

The quality of the curve isn’t bad, considering this was a first effort.   The right hand side deteriorates in quality compared to the left hand side, but its trend is still very clear.  The change in quality affects the target star and the comparators, and I guess is due to the stars dropping in altitude during the imaging run and so becoming increasingly affected by atmospheric instability.

Conclusion

I proved to myself that a keen amateur with reasonably normal equipment and some CCD imaging experience can successfully detect an exoplanet from a back yard with a small telescope.   It also turned out not to be that difficult, though admittedly I chose as easy a target as I could.

The sense of achievement as the planet’s light curve gradually appeared on my computer screen was immense.  This felt like proper science!

The success is of course primarily attributable to the quality of equipment and software now available to amateurs at a reasonable cost and the immediate availability of help and information on the internet.

I am not sure what, if anything, I will do with that new found ability.  In a few months’ time, I suppose I may have another go and pick a more difficult target.  It seems that amateurs do have a role in exoplanet detection by providing further data in respect of planets already discovered where the professional community has moved on to other targets.  But for the moment, I am happy to leave the fun to astronomers and their telescopes in Mauna Kea, La Silla or up in space.


Experimenting with a Star Analyser (200 lines/mm)

By Peter Gudegon

The Star Analyser looks like an ordinary 1.25″ glass filter, but is a diffraction grating that splits up the light coming from a star into its various colours. Many stars show absorption lines in their spectra where certain colours are missing, while others show bright emission lines where they emit most of their light at particular wavelengths. Suddenly each of those otherwise bland, normal stars starts to reveal its own individual identity… some people refer to it as looking at a star’s fingerprint.

The analyser can be used in a variety of ways:

  • You don’t need a telescope. If you have a camera (preferably a DSLR) you can mount the star analyser in front of the camera lens, either by mounting it on a piece of black card or by buying a special filter holder. This limits the lens aperture size, but is fine for bright stars.
  • Those with a telescope that uses standard 1.25″ diameter eyepieces can simply screw it into the eyepiece as you would a normal filter, then observe the star and its spectrum through the eyepiece as normal. Additional spacers between the analyser and eyepiece can be used to increase the spread of the spectrum, making it easier to see any detail.
  • But the most versatile way of using it is with a telescope and a camera (with no lens), mounting the analyser just in front of the camera (most telescope/camera adapters already have a thread for mounting filters).

The cost of these analysers is around €130-155 (August 2016) and they come in two versions: a 100 lines/mm grating, which is the standard type usually recommended, and a 200 lines/mm grating, intended more for work with CCD cameras, which has a lower profile to allow it to fit inside filter wheels.

What makes these so good is that they use a “blazed” grating. A normal diffraction grating has most of the light passing straight through it, with a relatively low percentage of light being directed into one of several orders of diffraction, creating a rather dim image. A “blazed” grating increases the amount of light directed into one particular order; in some designs up to 70% of the incident light may appear in the preferred order.
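
How far the spectrum spreads across the sensor follows from the grating equation. A first-order, small-angle sketch (the 50 mm grating-to-sensor distance and 5.4 µm pixel size are illustrative figures, not measurements from this set-up):

```python
def dispersion_angstrom_per_pixel(lines_per_mm, grating_to_sensor_mm,
                                  pixel_um):
    """First-order, small-angle form of the grating equation
    m*lambda = d*sin(theta): linear dispersion at the sensor is the
    groove spacing times (pixel size / grating-to-sensor distance)."""
    groove_angstrom = 1e7 / lines_per_mm  # groove spacing in Angstroms
    return groove_angstrom * (pixel_um * 1e-3) / grating_to_sensor_mm

# 200 lines/mm grating 50 mm in front of a sensor with 5.4 um pixels:
print(f"{dispersion_angstrom_per_pixel(200, 50, 5.4):.1f}")  # 5.4 A/px
```

Moving the grating further from the sensor stretches the spectrum (more Ångströms resolved per pixel of seeing blur), which is why the spacers mentioned above increase the visible detail.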

For my own use, I used a (large) normal camera lens in place of a telescope (Sigma 150-500mm zoom), then a slim EOS lens to T2 adapter, followed by a Filter-Drawer (in which sits the star analyser), in front of a QHY9 (monochrome) CCD camera.

Unlike a normal (and very expensive) spectroscope that passes the light through a very narrow slit, this relies on the star image being as small and stable as possible (ideally a stationary point source). Too high a magnification (with a telescope) amplifies any movement of the star due to the atmosphere, which greatly reduces the resolution of the end result.
Although a zoom lens might be frowned upon by most astronomers, on a mount that has to be erected and dismantled every night it provides a very neat, portable solution, and makes initial alignment of the mount very easy before zooming in to use the full diameter of the objective. The relatively low magnification means the motorised drive easily allows exposures of 30 seconds without the need to set up an autoguider, and to my surprise it has already allowed me to record spectra from stars down to magnitude 10.
After a quick test on some of the stars in Lyra, what I really wanted to try this on were some Wolf-Rayet stars. These stars have very strong emission lines, but unfortunately they are quite rare, and in the Northern hemisphere the best known and easiest to observe are a group in Cygnus. However, they are all quite faint, the brightest being magnitude 6.7 (i.e. below naked-eye visibility).

Below is a highly enlarged part of a picture showing the star WR136 on the left and, on the right, its resulting spectrum, in which you can clearly see some bright emission features.

Spectra WR136

But to analyse the results you really need to turn the spectrum image into a graph and calibrate it (which turns out to be much easier than it sounds). Using RSpec to do this created the following graph. Determining the significance of each line is where it becomes interesting… and involves a lot of Googling.

Spectra Graph of spectra for WR136
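
The wavelength calibration behind a graph like this is essentially a two-point straight line: the zero-order star image marks 0 Å, and one identified line fixes the dispersion. A minimal sketch (the pixel positions, and the choice of the strong He II 5411 Å line common in WN stars, are invented for illustration; RSpec does this interactively):

```python
def wavelength_scale(zero_order_px, line_px, line_angstrom):
    """Two-point linear calibration: returns a pixel -> wavelength
    function anchored at the zero-order image (0 A by definition)
    and one identified spectral line."""
    dispersion = line_angstrom / (line_px - zero_order_px)  # A per px
    return lambda px: (px - zero_order_px) * dispersion

# Zero order at pixel 100; He II 5411 A identified at pixel 1102:
to_wavelength = wavelength_scale(100, 1102, 5411.0)
print(f"{to_wavelength(1200):.0f} A")  # 5940 A
```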

Next, another couple of Wolf-Rayet stars, this time WR135 and WR137, which are known as carbon-rich stars. The difference in their spectra from the above is immediately obvious. This type of WR star is known for its strong C III and C IV lines at 5690-5820 Angstroms. I was surprised that with this relatively simple set-up I could look at these two stars, at about magnitude 8.5, and even record the differing amplitude of the C III line: the lines are noticeably narrower for WR135, which distinguishes it as spectral type WC8, compared to the spectral type WC7 of WR137.

Spectra Graph of spectra for WR135

 

Spectra Graph of spectra for WR137

 

 


Cubesats and new bright night-time object

By David A. Ord

The debate on the subject of CubeSats is warming up…

Three student-built CubeSats were launched into space on April 25th on Soyuz flight VS14 from Europe’s spaceport in Kourou, French Guiana. The CubeSats hitched a lift with the launch of the European Earth-monitoring satellite Sentinel-1B. The launch was part of ESA’s outreach programme ‘Fly Your Satellite!’, set up to encourage education in space technology. All three successfully called home and are currently in low Earth orbit.

CubeSat is a term describing a satellite’s approximate size and mass. A 1-unit (1U) CubeSat is a 10-centimeter cube weighing about 1 kilogram. Most launched so far are in the 1- to 3-unit size, but the industry is expanding so rapidly that these early trends may not endure.

Cubesat

Artist’s impression of a CubeSat in orbit

In the last decade, CubeSats have gone from curious toys to capable tools. Advances in technology have expanded their capabilities in areas as diverse as imaging the Earth, studying space weather and even military interest. They are attracting great interest from scientists and venture capitalists alike.

To date, all CubeSats have operated in low Earth orbit, but NASA has announced that it will send a pair of CubeSats on their first interplanetary mission with InSight, its next mission to Mars. The pair of tiny satellites will enter a 3,500 km orbit of the red planet and provide a communication relay for the InSight lander. NASA sees this project as a test for CubeSat technology, and with the traffic currently orbiting Mars there will be no shortage of alternative communication paths should the worst happen to the CubeSats.

cubsat communication relay

CubeSats will provide a communication relay for the Mars Insight mission

To date, in excess of 450 CubeSats are known to have been launched, with many more waiting to hitch a ride on a host launcher. There are also probably quite a few which have never been fully documented.

Most CubeSats have no on-board propulsion. They are generally obliged to take such launch opportunities as are available to them. They are typically a ‘secondary’ payload and must accept whatever orbit is required for the rocket’s main customer.

Having to hitch a ride can mean accepting a launch to an orbit in which their spacecraft will remain for many decades, long after their operational lives of two years or so. Many do not include a de-orbiting strategy. And therein begins the problem.

Satellites and debris are monitored by the US Air Force Joint Space Operations Center (JSpOC), and CubeSats in their smallest form, 10 centimetres on a side, are near the lower limit of what can easily be tracked. Even for larger CubeSats, obtaining precise positions is difficult, and their orbits carry greater uncertainties.

JSpOC is the body which provides information to satellite operators to avoid collisions with debris and provides the data to ensure the safety of astronauts aboard the ISS. Several times, the ISS has had to be moved out of the path of debris and on two or three occasions, the astronauts have been ordered to take refuge in the Soyuz spacecraft ‘lifeboat’ for additional protection.

No wonder then that there are those in the industry who derisively refer to CubeSats as 'debris Sats'. The more hardware there is in space, the greater the chance of collisions. To mitigate these risks, CubeSats are supposed to come down within 25 years. However, there is no enforcement of this rule. NASA claimed that by the end of 2014, 1 in 5 US-originating CubeSats and over a third of all non-US CubeSats were in direct contravention of the 25-year 'guideline'.

Even with full compliance, operational practices can still increase the risk of collision. The space station, for instance, releases many CubeSats at the same time. These CubeSats come off in a cloud, and JSpOC must try to track them and add them to its catalogue of objects so that other satellites can avoid them.

The problem with this 'cloud' of satellites is that it can take up to a week for JSpOC to work out which satellite is which and add them to its catalogue. Until then, other spacecraft cannot take avoiding action against them because their positions are not known. There is always this time lag after the launch or deployment of a CubeSat during which other objects cannot be protected.

Some predictions have CubeSat launches exceeding 200 per annum, and at that volume, some argue, the risk becomes too great. Because of the much greater uncertainties in CubeSat positions, one simulation found that the collision risk posed by a CubeSat was 30 times greater than that for a conventional satellite.

Worryingly, that simulation has already proved accurate in one instance. It predicted that CubeSat collisions should begin in the 2013 to 2014 period and, sure enough, the first one happened in May 2013, resulting in the loss of Ecuador's first CubeSat, NEE-01 Pegaso.

The aftermath of an accidental collision opens a huge can of worms labelled Space Law. There are few precedents, no laws, and multi-billions of dollars at stake – perfect, fertile ground for the legal profession.

Who can launch what, and into what orbit? Should there be enforceable laws?

So, it is against this backdrop that a Russian project comes to the fore. As stated in an earlier story, as of January this year Roscosmos, the Russian equivalent of NASA, became a 'private' commercial entity. This was to better enable it to sell more RD-180 rocket engines to the Americans without them seeming to come from the 'state', with which the Americans still have an embargo.

If there were any doubt about the switch from communism to commercialism, Roscosmos has confirmed a project to be paid for through the Russian crowd-funding organisation Boomstarter. The project is called Mayak, meaning Beacon, and some 1.7 million roubles has been raised by this novel funding method. What does the crowd want for its money? An orbiting mirror to reflect the Sun, as a memorial to the history and tradition of Russian achievements in space!


Reflecting the Sun – a new bright object in the sky

The 16-metre tetrahedron-shaped reflector could launch later this year on a Soyuz-2 rocket and take its place in low Earth orbit. It will bounce the Sun's rays back to Earth as it orbits, making it brighter than any star in the night sky.

So later this year, when you look up at the brightest star you can see in the sky, you can say (to paraphrase John McEnroe) 'You cannot be Sirius' – seriously?


Are underground lunar settlements viable?

By Frank Bonner

When looking at permanent settlements on the Moon there are essentially two options: building on the lunar surface, or finding a pre-existing underground chamber which can be made airtight.

In recent years there has been growing interest in the second approach. In particular, the search has been on for lava tubes of substantial size in the areas of interest for a permanent settlement. There are a number of good reasons for this. It would mean that the need to transport building material to the site would be much reduced.

An underground location is likely to be inherently safer against both meteor strikes and solar and cosmic radiation than a surface site. Making an appropriate lava tube habitable may involve little more than building an airtight door at the entrance to the tube. A sufficiently large lava tube would allow for expanding the settlement at little more cost than transporting people and their immediate technological needs there in the first place.

It is no surprise therefore that a lot of effort has gone into identifying suitable lava tubes. These lava tubes are fairly ancient structures left over from a time when the Moon was geologically active. As the name implies they transported molten lava from the Moon’s interior to the surface and were key to forming the great lava plains of the Moon’s Seas (Maria). The entrance to these lava tubes would show up as pits on the lunar surface and a number of these have been identified from photography of the Moon’s surface.


The photo shows a pit, initially discovered by the Japanese Kaguya spacecraft, which is believed to be the entrance to a lava tube 65 metres wide and 80 metres deep in an area of the Moon known as the Marius Hills.

The Lunar Reconnaissance Orbiter has photographed over 200 of these pits which show signs of leading into lava tubes. Current best estimates are that these pits lead to chambers with diameters from 5 to 900 metres, although there is no reason in principle why some of these caverns could not measure several kilometres across – large enough to encompass a whole city.

The problem to date is that all of this has been inferred from looking at the surface of the Moon and its sinuous rilles in particular. There has been no hard proof of the existence of these chambers.

Establishing where these lava tubes are, how big they are and whether they exist at all has become an important piece of research, and data from the GRAIL mission is starting to shine a clearer light on these questions.

The GRAIL satellites collected detailed data on gravity variations across the Moon.


The photo shows that there are quite wide variations in gravity across the Moon’s surface.

This information allowed a team of scientists at the Aeronautics and Astronautics Department of Purdue University in Indiana, led by Rohan Sood, to get a better idea of what the Moon's interior looks like and, in particular, of the suspected buried lava tubes.

Changes in the Moon’s gravity reflect changes in the amount and density of the material in the area being measured. In the area of the Marius Hills where the hole shown above is located they found the signature of a subsurface cavity. They also found signatures for at least a further ten features that could be lava tubes, spread across the surface of the Moon and close to ancient volcanic seas. Some of these signatures indicate tubes measuring more than 100 Km’s long and several Km’s wide.

The team announced their results in a paper, 'Detection of Buried Empty Lunar Lava Tubes Using GRAIL Gravity Data', which they presented to the 47th Lunar and Planetary Science Conference in Texas. As they point out in the paper, this is not definitive proof of the existence of these lava tubes, because of a mismatch between the size of the tubes and the resolution of the GRAIL data.

It is however a strong indication of their existence, and points to at least some of them being far larger than had previously been thought. In the end it is likely to take a mission with ground-penetrating radar to finally confirm the existence or otherwise of these lava tubes, something that might be possible with the upcoming LAROSS mission.

It is certainly something which is exciting the proponents of a subsurface lunar settlement.


Gravitational waves detected announcement 11th Feb 2016

At a press conference at LIGO today, Thursday 11 February 2016, scientists announced that they have definitely detected gravitational waves.

The signal was detected on 14 September 2015 and was caused by two colliding black holes. The signal is said to match exactly the predictions made by Einstein's General Theory of Relativity. The black holes concerned were about 30 solar masses each, with diameters of around 150 km, and were located at a distance of 1.3 billion light years. As well as confirming gravitational waves, the discovery is proof that binary black hole systems can exist.

At the time of collision the black holes had a velocity of about half the speed of light. Some 60 solar masses of material crashing together at this speed are certainly going to release plenty of energy.

These gravitational waves had been travelling for 1.3 billion years before they reached Earth. The effect of that fantastic release of energy, all that time and distance ago, was to move the LIGO machinery by 1/1000 of the width of a proton.
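That figure can be turned into the dimensionless 'strain' h = ΔL/L that LIGO actually quotes. The arithmetic below is a back-of-envelope sketch using an assumed proton diameter of about 1.7e-15 m and LIGO's 4 km arm length; the result lands in the same order of magnitude as the published peak strain for this event (around 1e-21).

```python
# Back-of-envelope: a displacement of 1/1000 of a proton's width
# across LIGO's 4 km arms, expressed as a strain h = dL / L.
PROTON_DIAMETER_M = 1.7e-15   # assumed typical value
ARM_LENGTH_M = 4_000.0        # LIGO arm length

displacement_m = PROTON_DIAMETER_M / 1000   # ~1.7e-18 m
strain = displacement_m / ARM_LENGTH_M      # ~4e-22, dimensionless

print(f"displacement ~ {displacement_m:.1e} m, strain h ~ {strain:.1e}")
```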

The detection of gravitational waves is being likened to the Apollo moon landings of the 1960s. As a result of the detection it is being predicted that we will now go on to see things that we never knew existed.

The team feel confident that they have definitely seen gravitational waves because the signal turned up in the two detectors 7 milliseconds apart, which is within the range of delays expected given that the detectors are located in different places.
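The 7-millisecond figure makes sense given the geometry. The two LIGO sites, in Washington state and Louisiana, are roughly 3,000 km apart (my approximate figure, not from the report), so a signal travelling at the speed of light can arrive at the two detectors at most about 10 ms apart; any genuine astrophysical signal must fall inside that window.

```python
# Sanity check: maximum possible arrival-time difference between the
# two LIGO detectors for a signal travelling at light speed.
C_M_PER_S = 299_792_458.0   # speed of light
SEPARATION_M = 3.0e6        # ~3,000 km site separation (approximate)

max_delay_ms = SEPARATION_M / C_M_PER_S * 1000.0
print(f"max delay ~ {max_delay_ms:.1f} ms")  # ~10 ms; 7 ms fits inside
```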

We will update this report at our March meeting.


Astrofest 2016 Saturday afternoon

Saturday afternoon started with Andrew Pontzen of University College London asking 'Does Dark Matter Exist?' and going on to say why he thinks it does. Andrew laid out the main reasons to think that five sixths of the mass of the Universe is invisible. In no particular order:

  • There is not enough visible mass in galaxies to stop them flying apart.
  • The gravitational lensing of distant galaxies needs five times more mass in the foreground galaxies than we can see.
  • Where galaxies collide, something passes through without interacting with all the normal matter.

In each of these cases there needs to be five times more matter than we can see to explain what is happening, and it is this consistency which suggests we are on the right track.

The current theory is that Dark Matter is a particle which is invisible and untouchable. Initially this seems strange, but if you consider that ordinary matter is only visible and touchable because of electromagnetism, it becomes clear that Dark Matter may simply be a particle which does not feel the electromagnetic force. We already know of particles which do not feel electromagnetism, for example the neutrino. So, if it does not feel electromagnetism, which of the other three forces does it feel? The current view is:

  • It does feel gravity.
  • It may feel the weak force.
  • It cannot feel the strong force because, if it did, pretty much everything would be radioactive.

So far all attempts to find Dark Matter have failed. So are we looking for the wrong thing? Andrew's view is that we are not. He points to the fact that Einstein's field equations match exactly what Planck found from the cosmic microwave background, and to this graph as confirmation, in his view, that we are looking in the right place.

The theory matches the observations. All we need to do now is find the stuff!

Following this, Hugh Hudson from the University of Glasgow outlined a project to crowd-source images from next year's solar eclipse to produce a mega-movie of the event, which would allow the solar corona to be studied in unprecedented detail. Google will contribute the computing power, while work is underway to develop a smartphone app to help with the task. There is a lot yet to be done, but it looks like an interesting project. More detail can be found on their website at http://www.eclipsemegamovie.org

The next speaker was Lewis Dartnell from the University of Kent on the subject of 'What Makes a Habitable Planet?'. Lewis started by talking about what we mean by life. He pointed out that on Earth extremophiles live in the most hostile of conditions. In fact, extremophiles can be found living in conditions like those we find on Mars, Venus and Europa. So we already know that conditions on those worlds can support life, although whether they do is a different question.

Whilst we do not really know the conditions in which life originally evolved on Earth, we now have a very good idea of the conditions in which it can survive. But when we look for worlds which may have life, another factor is the ability to support that life over very long periods, allowing it to evolve. So, for instance, when life was first evolving on Earth, Mars was actually a very similar planet. Although it was on the outer edge of the habitable zone, it had a thick atmosphere and flowing water on its surface. However, because it was much smaller, it lost its magnetic field quite quickly, and with that gone the solar wind stripped away its atmosphere and with it the flowing water. So conditions not only have to be right, they also have to stay stable over geological timescales.

Lewis finished by saying he thinks we are on the brink of not only finding one Earth twin but that we will find many over the coming years and that the missions like Twinkle and its successors will be able to tell us if life exists on these planets through the spectroscopic analysis of their atmospheres.

The final talk of the day came from John Spencer on some of the early results from the New Horizons mission to Pluto. We have covered this fairly thoroughly in recent meetings and there was really no new material in John's talk, so I will not cover it at any length here, although some of the high-resolution images he showed were quite stunning.

With that Astrofest ended for another year.
