What's new

Micro stories - news bits too small to have their own thread

NASA Wants Your Help Figuring Out How to Build Space Habitats

1256577392336998433.jpg


If and when we send colonists to Mars and beyond, we’re going to need habitats unlike any we’ve built before. To encourage out-of-the-box thinking, NASA and America Makes are kicking off a $2.25 million competition to design and build 3D printed space habitats.

One of the biggest barriers to the construction of a space colony is covering the cost of shipping the building materials we’ll need. Since it currently costs roughly $10,000 to blast a pound of anything off our planet, scientists, engineers and entrepreneurs have been asking how we might get away with less cargo. Indeed, that’s one of the main thrusts behind asteroid mining, which could offer spacefaring humans a bountiful supply of water and metals.
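For a sense of scale, here is a rough back-of-envelope sketch of what in situ construction could save. The $10,000-per-pound figure is the one cited above; the habitat mass and the 90% locally sourced fraction are invented placeholders, not NASA numbers.

```python
# Rough launch-cost comparison for a hypothetical habitat. The ~$10,000/lb
# launch cost is the figure cited above; the 50-ton structure mass and the
# 90% in-situ fraction are invented placeholders for illustration only.
COST_PER_LB_USD = 10_000
LBS_PER_METRIC_TON = 2_204.6

habitat_mass_tons = 50          # hypothetical mass of the habitat structure
shipped_fraction = 0.1          # what still launches from Earth if 90% is printed in situ

ship_everything = habitat_mass_tons * LBS_PER_METRIC_TON * COST_PER_LB_USD
ship_the_rest = ship_everything * shipped_fraction

print(f"Ship everything from Earth: ${ship_everything / 1e9:.2f} billion")
print(f"Print 90% from local dirt:  ${ship_the_rest / 1e9:.2f} billion")
```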

Other technologies that take advantage of in situ resources have been discussed in the context of a Martian habitat, but so far, the focus has been on how we might get enough water to drink and oxygen to breathe. While these are vital pieces of the puzzle, we could save ourselves a lot of money and effort if we were able to manufacture infrastructure from indigenous materials as well.

Which is where the new competition comes in. While some money is being offered for just plain awesome architectural concepts, the lion’s share of the prize pot focuses on the 3D printing technologies needed to fabricate infrastructure from in situ materials and recyclables.

NASA breaks it down for us:

The first phase of the competition, announced Saturday at the Bay Area Maker Faire in San Mateo, California, runs through Sept. 27. This phase, a design competition, calls on participants to develop state-of-the-art architectural concepts that take advantage of the unique capabilities 3-D printing offers. The top 30 submissions will be judged and a prize purse of $50,000 will be awarded at the 2015 World Maker Faire in New York.

The second phase of the competition is divided into two levels. The Structural Member Competition (Level 1) focuses on the fabrication technologies needed to manufacture structural components from a combination of indigenous materials and recyclables, or indigenous materials alone. The On-Site Habitat Competition (Level 2) challenges competitors to fabricate full-scale habitats using indigenous materials or indigenous materials combined with recyclables. Both levels open for registration Sept. 26, and each carries a $1.1 million prize.


So, if you’ve always thought you had a brilliant idea for how to build a Martian city or a deep space generation ship, now’s your chance to find out. Worst case scenario, you come up with a cool concept. Best case, you become the architect or engineer behind humanity’s first outer space colony, with generations of Martians and Alpha Centaurians singing your praises. Doesn’t sound too bad either way.
 
New Earth-Orbiting Microwave Gun is Making Killer Maps of Wind Dynamics

1256661181228230035.jpg


On May 10th, tropical storm Ana—the first named storm of this year’s North Atlantic hurricane season—made landfall along the Carolina coast. NASA scientists took the opportunity to observe the storm’s wind dynamics with one of their newest toys and produced this spectacular wind map while they were at it.

The Rapid Scatterometer joined the rest of NASA’s Earth Observing fleet on the International Space Station last September. RapidScat is basically a giant microwave gun that sends pulses of radiation to the ocean’s surface, which then bounce back toward the instrument’s sensor. Choppy waters return a more powerful signal than quiet waters, information that RapidScat uses to determine wind speed and direction.
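The retrieval idea can be sketched in a few lines: backscattered power rises with sea-surface roughness, so a monotonic model can be inverted to get wind speed. The power law below is an invented stand-in; the real RapidScat processing uses an empirical geophysical model function and multiple look angles to recover direction as well as speed.

```python
# Toy scatterometry sketch: rougher (windier) seas return more microwave power,
# so inverting a backscatter-vs-wind model yields wind speed. The power law
# used here is an arbitrary stand-in, not RapidScat's actual model function.
def toy_backscatter(wind_speed_ms: float) -> float:
    """Hypothetical normalized radar cross-section for a given wind speed."""
    return 1e-3 * wind_speed_ms ** 1.5

def retrieve_wind_speed(sigma0: float) -> float:
    """Invert the toy model: estimate wind speed from measured backscatter."""
    return (sigma0 / 1e-3) ** (1 / 1.5)

for wind in (5, 15, 30):  # m/s
    sigma0 = toy_backscatter(wind)
    print(f"{wind:>2} m/s -> sigma0 {sigma0:.4f} -> retrieved {retrieve_wind_speed(sigma0):.1f} m/s")
```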

According to NASA:

The image above was produced with data acquired by RapidScat as Ana approached the coast on the afternoon of May 8, 2015. Arrows represent the direction of near-surface winds. Shades of blue indicate the range of wind speeds (lighter blue and green represent faster-moving winds). The image below, acquired with the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite, shows a natural-color view of the same storm as it appeared on the morning of May 9.

1256661181306651283.jpg


Hurricane season is just getting started, and we can expect plenty more cool NASA images as things kick into high gear.
 
IBM demos first fully integrated monolithic silicon photonics chip | Ars Technica UK

IBM demos first fully integrated monolithic silicon photonics chip
Electro-optical chips could bring big bandwidth gains and lower power consumption.

17342198420_179a6d23de_k-640x537.jpg


At a conference in the US, IBM has demonstrated what it claims to be the first fully integrated wavelength multiplexed silicon photonics chip. This is a big step towards commercial computer chips that support both electrical and optical circuits on the same chip package, and ultimately the same die. Optical interconnects and networks can offer much higher bandwidth than their copper counterparts, while consuming less energy—two factors that are rather beneficial as the Internet grows and centralised computing resources continue to swell.

Engineers have long known that fibre-optic links are more desirable than copper wires for shuttling data around—the available bandwidth is higher, the distances that signals can be squirted over are longer, and energy consumption is lower. On the other hand, when it comes to actually doing stuff with that data, electronics are where it's at. This dichotomy has resulted in a very pronounced split between optical and electrical technologies: optics are used for networking between computers, but inside the chassis it's electronics all the way.

This approach has worked well so far, but as bandwidth and energy requirements continue to soar, research labs around the world have been looking at ways of bringing the optics ever closer to the electronics. The first step is to bring optical channels onto the motherboard, then onto the chip package, and ultimately onto the die so that electrical and optical pathways run side-by-side at a nanometer scale.

ibm-silicon-photonics-multiplexing-diagram-640x399.jpg


IBM's latest nanophotonic chip belongs to the second category: it can be placed on the same package as an electronic chip, bringing the electro-optical conversion a lot closer to the logic. It's important to note that the lasers themselves are still being produced off-chip, and brought into the nanophotonic chip through the "laser input ports" that you can see in the diagram above. Once the chip has been fed some lasers, there are four receive and transmit ports, each capable of transporting data at 25 gigabits per second, which are bundled up into 100Gbps channels via wavelength multiplexing.

That's just this chip, though; IBM says that, in theory, its technology could allow for chips with up to eight channels. 800Gbps from a single optical transceiver would be pretty impressive.

SiliconPhotonics2-640x400.jpg


For now, IBM is targeting its silicon photonics technology at data centre and HPC settings, where bandwidth can be a bottleneck. IBM says it has successfully demoed its new photonic chips in a "datacenter interconnect" setup that could push 100Gbps over a range of up to 2 kilometres (1.24 miles). If IBM can produce nanophotonic transceivers capable of 800Gbps—and confirm there actually is a significant reduction in power consumption from moving the photonics closer to the electronics—then the company's technology could compare very favourably against standards such as 40Gbps and 100Gbps Ethernet, and the being-discussed 400Gbps/1Tbps Ethernet standard.

The next step, according to IBM, is to get the lasers on-package using III-V semiconductors. From there, the following step (which won't happen quickly) will be to get the lasers, waveguides, photodiodes, and other optical gubbins right onto the processor die itself, alongside the copper wires and transistors.

One of the most impressive facets of IBM's new chip is that it's fabricated on a fairly standard 90nm CMOS process. One of the larger barriers to the adoption of electro-optical computing, or indeed any novel method of computing, is whether it slots tidily into existing manufacturing processes: when you're a company like Intel with billions of dollars sunk into capital equipment, you ideally want to stick to tools, materials, and processes that you already know a lot about. If IBM's nanophotonic technology weren't built on a monolithic CMOS process, the odds of it being commercialised would be much lower.
 
Look at the Tiny Earthquakes Scientists Make to Predict Real Ones

1257481198376385381.jpg


This photo, captured through a polarizing filter, shows the buildup of stress along a modeled fault line at Los Alamos National Laboratory, where a team of scientists is trying to figure out how to forecast earthquakes.

The artificial fracture was created by sliding two semi-rigid plastic plates against each other, with a layer of small nylon cylinders between them. Take a closer look at the experimental setup:

1257481198628582245.jpg


By simulating the movement of Earth’s tectonic plates, the scientists reproduce the structure and dynamics of geological faults and demonstrate a mechanism by which one earthquake can influence the triggering of others.
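As a rough intuition for how stress builds up slowly and is released in sudden slips, here is a minimal "spring-slider" stick-slip sketch. It is a textbook toy model, not the Los Alamos photoelastic apparatus, and every parameter value is arbitrary.

```python
# Minimal stick-slip ("spring-slider") toy model: a block is dragged through a
# spring at constant loading speed and slips whenever the spring force exceeds
# static friction, dropping back to the kinetic level. A classic fault analogue,
# not the Los Alamos experiment; all parameters are arbitrary.
k = 1.0                  # spring stiffness
v_load = 0.01            # slow "tectonic" loading velocity
mu_static = 1.0          # failure threshold
mu_kinetic = 0.6         # residual stress after a slip
dt, steps = 1.0, 5000

stress, slip_events = 0.0, []
for t in range(steps):
    stress += k * v_load * dt            # gradual loading between events
    if stress >= mu_static:              # the "fault" fails...
        slip_events.append((t, stress - mu_kinetic))
        stress = mu_kinetic              # ...and stress drops suddenly

drops = [d for _, d in slip_events]
print(f"{len(slip_events)} slip events, mean stress drop {sum(drops) / len(drops):.2f}")
```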

 
This Newly-Discovered Class of Galaxies Shouldn't Exist

c3d5okfz0qswy58ajb65.png


When you spot a galaxy in a telescope, you know it — galaxies are bright, dense collections of millions of stars, often in a spiral or orb shape, held tightly together by gravitational forces. But now scientists have discovered a new kind of galaxy, which they call “fluffy” and “wispy.” No one is sure how they’ve come to be.

Dragonfly 44, pictured above, is one of a handful of these new galaxies discovered using the Dragonfly Telephoto Array and the celebrated Keck I telescope on Hawaii’s Mauna Kea.

A team of astronomers compared data from the two telescopes, aimed at the Coma galaxy cluster. The Coma cluster is about 300 million light years away and can be seen in the constellation Coma Berenices (near Leo). Unlike the other galaxies in this cluster, however, the new galaxies are more like clouds. They’re as big as our own Milky Way (about 60,000 light years across) but contain only about one percent of the stars. Astronomers are calling them ultra diffuse galaxies.
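A quick back-of-envelope number shows why they look so wispy: the same footprint as the Milky Way with roughly one percent of the stars means roughly one percent of the surface brightness, which in astronomers' units is about five magnitudes fainter per unit area. This is my own arithmetic, not a figure from the paper.

```python
import math

# Back-of-envelope surface-brightness deficit: same physical size as the
# Milky Way but ~1% of the stars implies ~1% of the light per unit area
# (assuming broadly similar stellar populations), i.e. ~5 magnitudes fainter.
star_fraction = 0.01
mag_fainter = -2.5 * math.log10(star_fraction)
print(f"~{mag_fainter:.1f} magnitudes fainter per unit area")   # ~5.0
```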

Such galaxies shouldn’t exist, given what we know about galactic formation. So now we’ll have to revise our hypotheses. Already, astronomers are coming up with ideas. University of Toronto astronomer Roberto Abraham said in a release from the Keck Observatory:

The big challenge now is to figure out where these mysterious objects came from. Are they ‘failed galaxies’ that started off well and then ran out of gas? Were they once normal galaxies that got knocked around so much inside the Coma cluster that they puffed up? Or are they bits of galaxies that were pulled off and then got lost in space?

zp9xh1umytpvrsjqllit.png


Yale University astronomer Pieter van Dokkum, who led the study of ultra diffuse galaxies, says that the most remarkable thing is that these galaxies managed to survive without being torn apart. Said van Dokkum:

It’s remarkable they have survived at all. They are found in a dense, violent region of space filled with dark matter and galaxies whizzing around, so we think they must be cloaked in their own invisible dark matter ‘shields’ that are protecting them from this intergalactic assault.

Another team member, San Jose State University astronomer Aaron Romanowsky, speculated about what life would be like on a planet in an ultra diffuse galaxy:

If there are any aliens living on a planet in an ultra-diffuse galaxy, they would have no band of light across the sky, like our own Milky Way, to tell them they were living in a galaxy. The night sky would be much emptier of stars.

You can read the full scientific paper about these galaxies in the Astrophysical Journal.
 

New findings rewrite old rules; render laws invalid. I love science:yahoo:.
 
How Cosmetics Companies Farm Human Skin to Test Their Products

1259101803527586629.png


As you slathered conditioner onto your noggin this morning, you probably weren’t thinking about lab-grown skin coins (if you were, r u ok?). But human skin grown in a lab is a booming business—and how it’s made is a little-known and fascinating story.

Yesterday, Bloomberg’s Caroline Winter brought us news of a new partnership between the tissue-printing company Organovo and beauty giant L’Oreal. Her post gives us an incredible glimpse into the world of skin farming, which L’Oreal has been doing for decades. “The company started farming derma back in the 1980s,” writes Winter nonchalantly. Hang on, what?

Though it sounds like something out of a horror movie, growing human tissue for testing purposes is a quite well-established industry. L’Oreal has been a pioneer in the practice, and it even runs a facility dedicated to it called the Predictive Evaluation Center, located in Lyon, France—along with a new lab in Shanghai.

The term “predictive evaluation” refers to testing how a new ingredient or product will affect human skin or eyes. That’s a job that is often—even today—relegated to lab animals, which are put through what amounts to torture in order to test the safety of new products. In the 1990s, L’Oreal started investing in producing skin to replace animal testing, and today, it’s completely free of the practice. Since 2008, the company says it’s tested 13,000 different products in its skin lab, from concealer to conditioner.


1259101803751661637.jpg


Right now, L’Oreal’s scientists grow these skin samples using leftover mammary cells from plastic surgery. They’re carefully grown into tiny samples, which are used to test new products. The commercial product is called Episkin, and L’Oreal actually sells samples of what it grows to other companies, too.

Here’s how The New York Times described the growth process inside the lab back in 2007:

To make Episkin, donor keratinocyte cells, collected after breast and abdominal plastic surgery, are cultured in tiny wells of collagen gel, immersed in water, amino acids and sugars, and then air-dried for 10 days or aged to mimic mature skin by exposure to UV light.

L’Oreal says it produces 130,000 tissue units every year, including skin but also epidermis and cornea—which is important when you’re testing products that come into contact with millions of eyes.

So, what happens once the tissue is grown? How are the products applied? A great 2012 story from The Telegraph about plastic surgery waste material has an account of the lab work being done using the manufactured skin:

On the day I visited, laboratory assistants were soberly applying pink hair conditioner on to trays of skin the size of a Polo mint, and as thin as a cigarette paper. In another lab it was being blasted with UV light to assess the protective power of sun cream.

As you might imagine, artificial skin is astronomically expensive stuff. Winter says that in 2011, a single sample cost $70.62.

That brings us back to the news that L’Oreal is partnering with Organovo, the San Diego-based lab that’s been pioneering 3D-printed tissue for almost a decade now. Organovo’s process is called bioprinting, and it uses a super-precise print head to deposit cells into specific structures using a scaffold (here’s a great explanation from io9’s George Dvorsky).

The idea, the companies say, is to apply Organovo’s bioprinting process to human skin, building on the decades of experience L’Oreal’s scientists have accumulated. Just as conventional 3D printing promises to make conventional manufacturing faster, cheaper, and more deft, bioprinting could do the same for a burgeoning skin manufacturing industry.

And the best part: It will help provide an alternative to animal testing—and hopefully eradicate it forever.
 

:o:Creepy medical news.

It will help provide an alternative to animal testing—and hopefully eradicate it forever.

:yahoo:This!!! I really hope so.
 
Fermionic microscope sees first light

PW-2015-05-19-Commissariat-fermion-1.jpg


A microscope that can see up to 1000 individual fermionic atoms has been developed by a team of physicists in the US. Using two laser beams, the research team traps a cloud of potassium atoms in an optical lattice, cools the atoms and then simultaneously images them. The new technique allows researchers to clearly resolve single fermions, directly observe their magnetic interactions and even detect entanglement within the ensemble.

Fermions are particles that have half-integer spin, and therefore are constrained by the Pauli exclusion principle, which dictates that no two identical fermions can occupy the same quantum state simultaneously. Fermions include many elementary particles – quarks, electrons, protons and neutrons – as well as atoms consisting of an odd number of these elementary particles. As a result, the collective behaviour of fermions is responsible for the structure of the elements in the periodic table, high-temperature superconductors, colossal magnetoresistance materials, the properties of nuclear matter and much more. Despite their importance, however, we still do not have a complete picture of strongly interacting systems of fermionic particles because they are notoriously difficult to image and study.

Researchers have been studying bosons – particles that have integer spin and can occupy the same quantum state – by cooling clouds of bosonic atoms down to temperatures near absolute zero to form a Bose–Einstein condensate and then studying their interactions. But doing the same with fermions is no mean feat – the exclusion principle does not allow two fermions to be in exactly the same state. Therefore, as more fermions are added to a system, each successive one comes in at an increasingly higher energy, making the system very tricky to cool. Furthermore, ultracold atoms are easily perturbed by just the light from a single photon, which makes it difficult to confine atoms for long enough to obtain a clear image.

Supercool light

To get round these problems, Lawrence Cheuk, Martin Zwierlein and colleagues at the Massachusetts Institute of Technology have developed a microscopy technique that involves imaging the atoms with the same light that cools them. The fermions are first cooled using standard methods – laser cooling, magnetic trapping and evaporative cooling of the gas – until the temperature of all of the atoms is just above absolute zero. At this point the atoms settle into the wells of an optical lattice, thereby stopping any contact between neighbouring fermions and preventing them from interacting with each other. The optical lattice is located just 7 μm from the microscope's imaging lens, and is made of criss-crossing laser beams that form an "egg carton" structure with a fermion trapped in each well.

The atoms are then cooled even more by using two lasers, each at a different wavelength. This method makes use of Raman transitions: an atom absorbs one photon, is immediately stimulated to emit another and so drops down one vibrational level in the process. The location of each of the atoms is identified by the stimulated photon that it emits as it cools. These photons are captured by the microscope lens above the lattice, and this allows the team to detect the fermion's exact position within the lattice to an accuracy better than the wavelength of the light.
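To put the gentleness of this imaging-while-cooling scheme in perspective, here is the standard single-photon recoil estimate for potassium-40 at its 767 nm imaging transition. It is textbook laser-cooling background rather than a number taken from the paper.

```python
# Standard photon-recoil estimate for potassium-40 at the ~767 nm D2 line.
# Textbook laser-cooling background, not a figure taken from the paper.
h = 6.626e-34          # Planck constant, J*s
amu = 1.6605e-27       # atomic mass unit, kg
m_k40 = 40 * amu       # mass of potassium-40
wavelength = 767e-9    # D2 transition wavelength, m

recoil_velocity = h / (m_k40 * wavelength)
print(f"single-photon recoil kick: {recoil_velocity * 100:.1f} cm/s")   # ~1.3 cm/s
```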

PW-2015-05-19-Commissariat-fermion-2.jpg

The atoms, potassium-40, are cooled during imaging by laser light, allowing thousands of photons to be collected by the microscope. (Courtesy: Lawrence Cheuk/MIT).

Using this method, Zwierlein and colleagues were able to cool and image more than 95% of the atoms in a potassium-40 gas cloud. The team was surprised to find that the fermions remained cold even after the imaging was complete. "That means I know where they are, and I can maybe move them around with a little tweezer to any location, and arrange them in any pattern I'd like," says Zwierlein. To make sure that their experiment did not suffer any light-assisted losses, the researchers looked at how the atoms move around between successive images, and at the statistics of how the atoms are distributed around the lattice. The team found that it was not losing a significant number of atoms.

Cold-atom toolbox

Chad Orzel, a physicist at Union College in the US who was not involved in the work, is impressed with the research because it opens up the possibility of using fermionic atoms to create a wider range of condensed-matter analogues. "If you look at the behaviour of bosons in an optical lattice, that's analogous to the behaviour of superconductors, where electrons have paired up to act like bosons. But a system of fermions in a lattice is more analogous to a normal conductor, where the electrons are subject to Pauli exclusion, and you can see other fun behaviours that way." He adds that with fermionic systems, "you can also think about using light fields to manipulate the interactions between atoms in interesting ways, and watch how particles move around". Orzel told physicsworld.com that Zwierlein's work is a nice addition to the cold-atom experimental toolbox. "Because the atoms are out in the open and directly imaged, you have all sorts of freedom to change parameters without needing to make whole new samples," he adds.

From Fermionic microscope sees first light - physicsworld.com
 
So some company wants to chat with me after 5PM and I said "hey that will screw up my commute home" and they replied "we'll call an Uber ride for you" (yeah no limo WTF :-))

So I looked them up and saw this:
Novogratz: Uber's valuation jumped $15 billion in one week - Business Insider

Uber is one of the most valuable private tech companies in the world. It has raised $5.9 billion at a $41 billion valuation, and a new round of funding — an additional $1.5 billion to $2 billion — would make Uber the most highly valued private tech company of all time, at over $50 billion.

Mike Novogratz, the president of $70 billion investment fund Fortress Investments, says that back when Uber was starting its roadshow to raise a recent round of funding, its valuation jumped $15 billion in just a week.

On Sunday's episode of "Wall Street Week," Novogratz said he had had an "interesting meeting" with former Uber CFO Brent Callinicos, whom Novogratz referred to only as "Uber's CFO" during his chat.

"Uber's the fastest-growing company, maybe in the history of the planet. And it's a brilliant idea, and they were raising capital," Novogratz said. "They had started their roadshow at a $25 billion valuation. And a week later the valuation had jumped to $40 billion."

"Wall Street Week" host Anthony Scaramucci interrupted Novogratz to ask him: "Did the valuation go up $15 billion just because they went on the roadshow and told the story?"

"Yes," Novogratz said.

Novogratz asked Uber's CFO how the company could justify such a lofty valuation. Novogratz said Callinicos told him Uber took between 20% and 25% of the fares paid to drivers, but Uber could later hike that up to 25% or 30%. "If you look at the growth and you add that extra pure margin, we'd make a lot more money and we justify this $40 billion valuation," Novogratz said, recalling what Callinicos told him.
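The margin argument is plain arithmetic: the same gross bookings at a higher take rate yield proportionally more revenue. A toy illustration; the take-rate ranges come from the article, while the bookings figure is an invented placeholder.

```python
# Toy take-rate arithmetic. The 20-25% and 25-30% ranges are from the article;
# the $10B gross-bookings figure is an invented placeholder.
gross_bookings = 10e9

current_cut = 0.225 * gross_bookings   # midpoint of 20-25%
future_cut = 0.275 * gross_bookings    # midpoint of 25-30%

growth = (future_cut / current_cut - 1) * 100
print(f"${current_cut / 1e9:.2f}B -> ${future_cut / 1e9:.2f}B (+{growth:.0f}% revenue on the same rides)")
```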

Earlier this year, Novogratz and Fortress Investments backed Lyft, Uber's biggest US-based competitor.

You can watch the full episode below. Tune in around minute 11 for Novogratz talking about Uber.

----------------------------------------------------------------------
So I'm in the ride right now. So far so good.
 
This Vacuum Chamber Looks Like Some Futuristic Spaceship Corridor

1259326946849404994.jpg


Huge vacuum chambers on Earth are crucial for building and testing spacecraft, so we can explore further into space. The photo above is Vacuum Chamber 5, where electric propulsion and power systems are being tested at Glenn Research Center.

According to NASA, the VF-5 is very special because it has “the highest pumping speed of any electric propulsion test facility in the world, which is important in maintaining a continuous space-like environment.” It’s here that NASA’s engineers test advanced Solar Electric Propulsion technology for future deep space exploration, including expeditions to Mars.

And it is awesome how VF-5 itself looks like a spaceship interior from a sci-fi movie, like a well-lit Nostromo or a rough-around-the-edges Discovery One. Take a closer look:

1259326946960505666.jpg


NASA explains the scene:

The cryogenic panels at the top and back of the chamber house a helium-cooled panel that reaches near absolute zero temperatures (about -440 degrees Fahrenheit). The extreme cold of this panel freezes any air left in the chamber and quickly freezes the thruster exhaust, allowing the chamber to maintain a high vacuum environment. The outer chevrons are cooled with liquid nitrogen to shield the cryogenic panels from the room temperature surfaces of the tank.

Most electric propulsion devices, such as Hall Thrusters, use xenon as a propellant, which is very expensive. By capturing the used xenon as ice during testing, researchers are able to recover the propellant to reuse, saving NASA and test customers considerable costs. The oil diffusion pumps along the bottom of the tank capped by circular covers use a low vapor pressure silicon oil to concentrate small amounts of gas to the point where it can be mechanically pumped from the chamber.





Diamond cavity boosts magnetic-field detection

NV-cavity.jpg


A new type of magnetometer based on diamond impurities has been unveiled by physicists in the US. The device is about 1000 times more sensitive than previous diamond-based sensors because it uses an optical cavity to concentrate laser light in the vicinity of the impurities. Although the new device cannot yet reach the sensitivity of some other types of magnetometers, the physicists believe that it offers significant practical advantages that will be useful to researchers in many fields, including those studying magnetic signals from the heart and brain.

The most precise magnetometers available today are superconducting quantum interference devices (SQUIDs) and atomic magnetometers, both of which can measure magnetic fields in the femtotesla range. However, the most sensitive SQUIDs must be operated at temperatures near absolute zero and atomic magnetometers need expensive and unwieldy vacuum and field-nulling systems. Sensors based on diamond impurities have the potential to be much more user-friendly because they use robust pieces of diamond and work at ambient pressures and temperatures. The devices make use of nitrogen vacancy (NV) centres, which occur when two adjacent carbon atoms in a diamond lattice are replaced by a nitrogen atom and a lattice vacancy. NV centres emit red light when excited by green light and the wavelength of this emitted light is shifted by the presence of an external magnetic field. Magnetic-field strength can be determined by measuring this shift and NV centres offer the added bonus of also being sensitive to small variations in relatively high fields.
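The conversion at the heart of NV magnetometry is simple: the NV spin resonance shifts by roughly 28 GHz per tesla (about 2.8 MHz per gauss), so a measured shift maps straight onto field strength. The sketch below uses that standard relation rather than anything specific to this paper.

```python
# Standard NV-magnetometry conversion (generic, not specific to this device):
# the NV spin resonance shifts by ~28 GHz per tesla, so a measured frequency
# shift translates directly into a magnetic-field strength.
NV_SHIFT_GHZ_PER_TESLA = 28.0

def field_from_shift(delta_f_hz: float) -> float:
    """Magnetic field in tesla inferred from a resonance shift in hertz."""
    return delta_f_hz / (NV_SHIFT_GHZ_PER_TESLA * 1e9)

print(f"{field_from_shift(28.0) * 1e12:.0f} pT")   # a 28 Hz shift corresponds to ~1000 pT (1 nT)
```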

Huge diamonds needed

Making a practical sensor remains a challenge, however, because NV centres are very weak absorbers of light. This means that the green light would need to travel about 1 m through a diamond to create enough red light to make a meaningful measurement. This distance could be decreased by using a diamond with a very high density of NV centres, but this would result in lower precision because the NV centres would interfere with each other.

Producing a 1 m-long diamond would be both difficult and expensive, so Dirk Englund and colleagues at the Massachusetts Institute of Technology took a different approach by having the green light bounce back and forth many times through a much smaller diamond. Their first attempt involved attaching special mirrors to the sides of a diamond to create an optical cavity. "We tried for close to a year and a half unsuccessfully," says Englund. "We also realized that, even if we did make such a cavity, it's relatively difficult to lock a laser to it – you'd have to have a specialized laser stabilized on a very narrow frequency."

Sparkling reflections

After this setback, the team realized that the diamond itself could act as the cavity. Diamonds are prized by jewellers precisely because they have a high refractive index, which causes light to bounce around inside them by total internal reflection and makes them sparkle. By injecting green light into a faceted edge of the diamond at a well-chosen angle, the researchers could make the light travel up to 1 m inside a diamond just 3 mm in length – with almost all the green light being absorbed along the way. As a result, a simple diode laser the size of a fingernail can be used to supply the green light. "It's quite possible we should have thought of it first," jokes Englund.
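Two quick numbers show why the trick works: diamond's high refractive index gives a small critical angle for total internal reflection, and folding a metre of path (the figure quoted above) into a 3 mm crystal takes only a few hundred passes. The geometry below is idealised, with each pass assumed to traverse the full length of the stone.

```python
import math

# Diamond's refractive index (~2.42) gives a small critical angle, so light
# injected at a shallow angle is trapped by total internal reflection.
n_diamond = 2.42
critical_angle_deg = math.degrees(math.asin(1 / n_diamond))
print(f"critical angle: ~{critical_angle_deg:.0f} degrees")     # ~24 degrees

# Folding the ~1 m path quoted in the article into a 3 mm crystal, idealising
# each bounce-to-bounce pass as traversing the full length of the diamond:
passes = math.ceil(1.0 / 3e-3)
print(f"~{passes} internal passes needed")                       # ~334
```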

The researchers used their device to measure magnetic fields that varied at a frequency of 1 Hz and achieved a sensitivity of a few picotesla – about three orders of magnitude less sensitive than the best SQUIDs. The team is now looking at how to improve the sensitivity further by collecting the emitted light more efficiently. "There are definitely a few orders of magnitude to go," says Englund. At current or slightly improved sensitivity, the device could be useful for investigating the electrical activity of the heart or brain.

"The NV centre is a relatively new area that's only been around for five or 10 years," says Mike Romalis of Princeton University in New Jersey. He adds that Englund’s sensor is interesting because it is somewhat more practical than other NV devices and also because it works with low-frequency magnetic signals where a lot of practical magnetic fields exist. The absolute accuracy of the sensors is currently below that of atomic magnetometers of similar size, he says, but their ability to operate at ambient conditions is a big advantage for investigating living tissues.

From Diamond cavity boosts magnetic-field detection - physicsworld.com
 
How A Notorious Chemical Weapon Led To The Invention Of Chemotherapy

1259423409874037030.jpg


Mustard gas was, and is, one of the most terrifying weapons of war. It made people break out in blisters, and killed them slowly over weeks. It also inspired one of the first effective forms of cancer therapy. Here’s how.

Mustard gas lingers for days, making soldiers’ skin break out in blisters and burning their eyes, noses, and lungs. When exposed to a lethal amount of the gas, a person could still take weeks to die. During the two world wars, scientists noted that the gas attacked nearly every part of the body. When doctors examined the bodies of soldiers killed by or exposed to mustard gas, they saw that the attack included the immune system. Otherwise healthy men surrendered to disease.

The full destructiveness of the gas was really brought home for the Allied forces in 1943, when the SS John Harvey, an American ship carrying both mustard gas and military personnel, was bombed. The containers of mustard gas were breached, condemning many of the men who survived the initial sinking of the ship to death as they tried to swim through clouds of gas. Doctors examining the bodies afterward noted that the gas seemed to target the men’s white blood cells.

Horrific as they were, the gas attacks gave cancer researchers an idea. Leukemia and lymphoma are both cancers that develop in white blood cells. At the time there was no real treatment for those diseases. But any agent that could kill off healthy cells might kill off cancerous ones as well.

Mustard gas itself was out of the question, but the doctors came up with an alternative chemical called nitrogen mustard, and started looking for test cases. They found a man now known to us only as J. D. He was a lymphoma patient. The cancer had grown so vigorously that he could barely move due to swollen, painful lymph nodes. The researchers injected him with nitrogen mustard, which, due to wartime security measures, they called “substance x,” and waited to see what would happen.

Although the injections he received didn’t save J. D., the treatment visibly helped the man. His pain decreased, his lymph nodes shrank, and he regained some of his mobility before he died. Doctors had found a new method of treating leukemia and lymphoma. Not only that, they found a way to treat cancer in general. Doctors could use chemicals to target cancer cells within the body, and each chemical could target a different type of cancer. Today we know it under the generalized name of chemotherapy. One of the most terrible weapons ever known led, through research and innovation, to a medical treatment that has saved countless lives.

@Oscar does the picture violate the ban of graphic images? I didn't mean for it to do so, but it is rather graphic. If it is a violation, I will happily change it.
 
Using electrochemistry, researchers create reconfigurable, voltage-controlled liquid metal antenna

Screen Shot 2015-05-20 at 6.20.49 AM.png


Researchers have held tremendous interest in liquid metal electronics for many years, but a significant and unfortunate drawback slowing the advance of such devices is that they tend to require external pumps that can't be easily integrated into electronic systems.


So a team of North Carolina State University (NCSU) researchers set out to create a reconfigurable liquid metal antenna controlled by voltage only, which they describe in the Journal of Applied Physics.

The team's work was inspired by a phenomenon recently observed during studies of liquid metal by coauthor Professor Michael Dickey's group within the Department of Chemical and Biomolecular Engineering at NCSU. By placing an electrical potential across the interface between the liquid metal and an electrolyte, they found that they could cause the liquid metal to spread by applying a positive voltage—or to contract by applying a negative voltage.

For a bit of background, the shape and length of the conducting paths that form an antenna determine its critical properties such as operating frequency and radiation pattern. "Using a liquid metal—such as eutectic gallium and indium—that can change its shape allows us to modify antenna properties more dramatically than is possible with a fixed conductor," explained Jacob Adams, coauthor and an assistant professor in the Department of Electrical and Computer Engineering at NCSU.

How did the team create the tunable antenna controlled by voltage only? By using electrochemical reactions to shorten and elongate a filament of liquid metal and change the antenna's operating frequency. Applying a small positive voltage causes the metal to flow into a capillary, while applying a small negative voltage makes the metal withdraw from the capillary.
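A crude way to see why length translates into frequency: for an idealised quarter-wave monopole in air, the resonant frequency is c/(4L), so doubling the filament length roughly halves the operating frequency. The geometry and lengths below are illustrative assumptions, not the actual antenna described in the paper.

```python
# Idealised quarter-wave monopole in air: resonant frequency f = c / (4 * L).
# The lengths below are illustrative only, not the geometry from the paper.
C = 3e8  # speed of light, m/s

def quarter_wave_resonance_hz(length_m: float) -> float:
    return C / (4 * length_m)

for length_cm in (3.0, 4.5, 6.0):   # filament grows as liquid metal is pushed into the capillary
    f_ghz = quarter_wave_resonance_hz(length_cm / 100) / 1e9
    print(f"{length_cm:4.1f} cm -> {f_ghz:.2f} GHz")
# 3 cm resonates near 2.5 GHz and 6 cm near 1.25 GHz: a 2x length change
# gives roughly a 2x frequency tuning range.
```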

The positive voltage "electrochemically deposits an oxide on the surface of the metal that lowers the surface tension, while a negative potential removes the oxide to increase the surface tension," Adams said. These differences in surface tension dictate which direction the metal will flow.

This advance makes it possible to "remove or regenerate enough of the 'oxide skin' with an applied voltage to make the liquid metal flow into or out of the capillary. We call this 'electrochemically controlled capillarity,' which is much like an electrochemical pump for the liquid metal," Adams noted.


Although antenna properties can be reconfigured to some extent by using solid conductors with electronic switches, the liquid metal approach greatly increases the range over which the antenna's operating frequency can be tuned. "Our antenna prototype using liquid metal can tune over a range of at least two times greater than systems using electronic switches," he pointed out.

Myriads of potential applications await within the realm of mobile devices. "Mobile device sizes are continuing to shrink and the burgeoning Internet of Things will likely create an enormous demand for small wireless systems," Adams said. "And as the number of services that a device must be capable of supporting grows, so too will the number of frequency bands over which the antenna and RF front-end must operate. This combination will create a real antenna design challenge for mobile systems because antenna size and operating bandwidth tend to be conflicting tradeoffs."

This is why tunable antennas are highly desirable: they can be miniaturized and adapted to correct for near-field loading problems such as the iPhone 4's well-publicized "death grip" issue of dropped calls caused by holding the phone by the bottom. Liquid metal systems "yield a larger range of tuning than conventional reconfigurable antennas, and the same approach can be applied to other components such as tunable filters," Adams said.

What's next for the researchers? They've already begun exploring the fundamental and applied elements of tunable liquid metals. "There's still much to learn about the behavior of the surface oxides and their effect on the surface tension of the metal," Adams said. "And we're studying ways to further improve the efficiency and speed of reconfiguration."

In the long term, Adams and colleagues hope to gain greater control of the shape of the liquid metal—not only in one-dimensional capillaries but perhaps even two-dimensional surfaces to obtain nearly any desired antenna shape. "This would enable enormous flexibility in the electromagnetic properties of the antenna and allow a single adaptive antenna to perform many functions," he added.

 
Air Force's X-37B Space Plane Launches Secret Mission Today


Editor's Update for 11:30 am ET:
The Air Force's X-37B space plane has successfully launched on its fourth mission. Read our latest story: US Air Force Launches X-37B Space Plane on 4th Mystery Mission

The United States Air Force's X-37B space plane and a tiny solar-sailing spacecraft will launch into orbit today, and you can watch the liftoff live.

The unmanned X-37B spacecraft is scheduled to blast off atop a United Launch Alliance Atlas V rocket today (May 20) from Florida's Cape Canaveral Air Force Station, during a four-hour launch window that opens at 10:45 a.m. EDT (1445 GMT).

x-37b-space-plane-afspc-5-5.jpg


The Atlas V is also carrying 10 tiny "cubesats" to orbit, including one called LightSail, which was developed by the nonprofit Planetary Society. LightSail aims to test key technologies ahead of a more involved solar-sailing mission using another cubesat in Earth orbit next year.

Today's launch marks the fourth space mission — known as Orbital Test Vehicle 4 (OTV-4) — for the reusable X-37B space plane, which looks a bit like NASA's now-retired space shuttle. The X-37B is much smaller, however; two of these robotic space planes could fit inside the shuttle's payload bay.

Details about the X-37B's activities are classified, as are most of its payloads, so it's unclear what the space plane will be doing on orbit or how long it will be aloft. But Air Force officials have long maintained that the vehicle is not a space weapon, stressing that it simply tests technologies for reusable spacecraft and future missions.

The Air Force owns two X-37B vehicles, both of which were built by Boeing's Phantom Works division. The two space planes had combined to fly three missions before today. Those prior flights launched in April 2010, March 2011 and December 2012, and lasted for 225, 469 and 675 days, respectively.

LightSail, meanwhile, is scheduled to deploy its 344-square-foot (32 square meters) sail 28 days from now. Atmospheric drag will pull the cubesat back down to Earth two to 10 days after this occurs, Planetary Society representatives say, but the brief mission should show how well LightSail's attitude-control and sail-deployment systems work.
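For scale, the thrust an ideal, perfectly reflective 32-square-metre sail would feel in full sunlight at Earth's distance from the Sun is only a few hundred micronewtons. This is the standard textbook estimate, not a Planetary Society figure.

```python
# Back-of-envelope solar-sail thrust for an ideal, perfectly reflective sail
# facing the Sun: F = 2 * S * A / c (the factor of 2 comes from reflection).
# Standard textbook estimate, not a figure from the mission team.
SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
C = 3.0e8                # speed of light, m/s
sail_area_m2 = 32.0      # from the article

thrust_newtons = 2 * SOLAR_FLUX * sail_area_m2 / C
print(f"ideal thrust: {thrust_newtons * 1e6:.0f} micronewtons")   # ~290 uN
```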

Today's launch (5/20/15)

 
The World's Oldest Stone Tools Were Not Made By Humans

1261036281281488530.jpg


Archeologists working in Kenya have discovered the world’s oldest stone tools. At 3.3 million years, they’re 700,000 years older than what were previously the most ancient stone tools ever discovered. In fact, they’re even older than humans.

Since io9 wrote about the discovery presented at a conference in April, the archeological team has published a paper in Nature with a bevy of new photos. These humble rocks may not look particularly exciting to the untrained eye. But to archeologists, these are clearly tools: anvils, sharp-edged flakes, and hammers. All told, researchers found 149 stone artifacts at a site in northern Kenya.

1261036284716824722.jpg


The tools were likely made with rudimentary techniques, as Smithsonian explains:

Further analysis of the markings on the tools and attempts to replicate their production suggests two possible ways: The toolmaker might have set the stone on a flat rock and chipped away at it with a hammer rock. Or, the toolmaker could have held the stone with two hands and hit it against the flat base rock.

But it’s really their age that’s surprising. Dating of the site puts them well before the emergence of the Homo genus 2.8 million years ago. The discovery means scientists will have to rethink the current narrative of brain evolution in early hominins. Who actually made the tools is unknown. One suspect is Kenyanthropus platyops, first discovered in, yes, Kenya in 1999.

1261036284860546194.jpg


1261036284926346386.jpg


1261036285006777746.jpg


@levina Levina, I'm tagging you since you asked me a few days ago if there were any updates in this thread. There have been a few since then.
 