The Extraordinary Aside

Bond contracts and diplomatic notes aren’t the only places where a casual aside can be more rewarding than the main text.

NASA announced yesterday that its Kepler space telescope has helped scientists identify an exoplanet clearly positioned in an orbit that would allow it to have liquid water on its surface.

Twice before, astronomers have announced planets found in that zone, but neither was as promising. One was disputed; the other is on the hot edge of the zone. Kepler 22-B is the smallest and best positioned, of the more than 500 planets found to orbit stars beyond our solar system, to have liquid water on its surface — among the ingredients necessary for life on Earth.

Good news of course, and with its size estimated at 2.4 times Earth’s, it’s the closest match yet to our own. But did you see what the author did right after the word “of”? Mentioned that, just by the by, humanity has now found more than 500 planets orbiting stars beyond our solar system. Five. Hundred.

Moreover, “With the discovery, the Kepler space telescope has now located 2,326 potential planets during its first 16 months of operation.” I’ve written about this before, but it never ceases to amaze. This is what living in the future is like.

P.S. Six years ago, the smallest confirmed exoplanet had a mass of about seven times Earth’s. The intervening years have roughly tripled the sensitivity of humanity’s detection capabilities.

On the value of “basic science” in economics and other disciplines

A few months ago, Larry Summers was reported to have made some comments regarding rules of thumb he used to distinguish between useful and not-so-useful economic papers when he was working in government:

He had a fairly clear categorisation for which ones were likely to be useful: read virtually all the ones that used the words leverage, liquidity, and deflation, he said, and virtually none that used the words optimising, choice-theoretic or neoclassical (presumably in the titles or abstracts).

However, while this sounds kind of harsh, he made sure to temper his criticism by saying that some seemingly useless work of limited apparent applicability might turn out to be useful in years to come (microfoundations for macroeconomics, perhaps?).

This last caveat is one I’ve frequently encountered in two contexts: from people who want to defend basic (natural) science, and from people who want to defend some subfield of economics that is just plain wacky. The argument is the same: it might turn out to be useful in the future.

Though true in the strict sense (I can’t rule out possible value coming from this research), the argument is frequently a “cheat”: I suspect that the person supporting basic science (or abstract economic theorizing) believes it is intrinsically valuable, whatever the usefulness of the results turns out to be. But since that is a tough sell to the general public (especially for the economist), they say instead that “well, this could actually turn out to be valued highly by you even if you don’t care about the intrinsic value.” And yes, there are clear cases of (truly) useful things that came out of (seemingly) pointless and abstract theorizing. Here’s an example from the US Department of Energy:

The discovery that all matter comes in discrete bundles was at the core of forefront research on quantum mechanics in the 1920s. This knowledge did not originally appear to have much connection to the way things were built or used in daily life. In time, however, the understanding of quantum mechanics allowed us to build devices such as the transistor and the laser. Our present-day electronic world, with computers, communications networks, medical technology, and space-age materials would be utterly impossible without the quantum revolution in the understanding of matter that occurred seven decades ago. But the payoff took time, and no one envisioned the enormous economic and social outcome at the time of the original research.

However, it seems wrong (especially of an economist) to just transfer this argument from basic science (whether mathematics or theoretical physics or whatever) to economics. The reason is simple: Take two types of research. One (“applied research”?) is practical and will with high probability lead to valuable insights (in terms of practical usefulness, economic value, material benefits to humanity or whatever). The other (“basic research”?) is highly abstract, divorced from empirical applications, and will with high probability fail to lead to such insights. With both of them there is uncertainty, and we can imagine some probability distribution over the “insight-value” each could generate. It seems to me that unless we have reason to believe that the tail of the “basic science” distribution is fatter – i.e., unless the probability of making truly mind-blowing, important progress is higher for basic than for applied research – then we should always go for the applied, in so far as the pragmatic value of the insights is what we want: the expected value would be higher, and the probability of an insight of any given value would be higher, with the applied research. In other words, we need a “fat-tail” argument – an argument that the distributions differ for outcomes lying far from the mean. Since discussing differences in the tails of various distributions in another context was part of what made Summers resign as President of Harvard, this is a point I think he would get easily.
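To make that distributional point concrete, here is a minimal sketch in Python. The distributions, parameters and threshold are purely illustrative assumptions on my part (nothing here is an estimate of the actual value of any research programme); they only show the shape of the comparison, one distribution with a higher mean against one with a fatter tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Purely illustrative assumptions: "applied" research pays off modestly but
# reliably, while "basic" research usually pays off very little but has a
# much heavier right tail. Neither is calibrated to anything real.
applied = rng.lognormal(mean=1.0, sigma=0.5, size=n)
basic = rng.lognormal(mean=-1.0, sigma=2.0, size=n)

print("mean payoff, applied:", applied.mean())   # ~3.1
print("mean payoff, basic:  ", basic.mean())     # ~2.7

# Probability of a truly mind-blowing insight, i.e. a payoff in the far tail.
threshold = 50.0
print("P(payoff > 50), applied:", (applied > threshold).mean())  # ~0
print("P(payoff > 50), basic:  ", (basic > threshold).mean())    # ~0.7%
```

With these made-up numbers the applied distribution wins on expected value, while the basic one is the only one with a real shot at an extreme payoff; whether anything like the latter actually holds for abstract economic theorizing is precisely what a fat-tail argument would have to establish.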

My point is just that I can see the possibility of this fat-tail argument for certain types of basic science, but that does not mean it is present in economics. In physics there could be some argument such as “the higher the granularity and precision with which we can understand and manipulate the world around us, the more opportunities are open to us for manipulating it to our benefit,” and this can be supported by examples from experience. In mathematics there could be an argument that “the more analytical tools we develop for a broader array of problems, the more mathematics will be able to power up other disciplines and improve their reach and value.” However, I am at a loss to see what more sophisticated representative-agent modelling in DSGE models or rational addiction models will give us. To me, such work seems more like Tolkienesque fantasy about alternate worlds. And if such fantasy about alternate, probably-not-even-conceivably-realistic worlds can be useful – then the question is: which ones are most likely to be useful, and how do we tell? Why representative agents deciding with optimal control theory? Why the (apparent) bias towards non-regulation and free markets?

Also – if such modelling divorced from evidence “could potentially” turn out to be useful – surely it could also “potentially” turn out to be harmful? For instance, if it misled (at times influential) economists into thinking that the world is simpler than it is and that it is imperative that we implement policies derived from such rational-choice fan-fiction. An anecdote that may provide an example: Brooksley Born, according to some accounts, pushed hard for the regulation of a booming, wild-west-frontier derivatives market. In this she was stopped by President Clinton’s Working Group on Financial Markets. Alan Greenspan claimed that regulation could lead to financial turmoil, and at one point the very same Larry Summers we started with called her and said that

“You’re going to cause the worst financial crisis since the end of World War II.”… [Summers then said he had] 13 bankers in his office who informed him of this.

Dear Socar

Dear Socar, Socar Public Relations and Socar of Georgia (if your website is working),

Normally when I put 50 lari worth of gasoline into my car, I get about half a tank. Earlier this week, I visited one of your affiliates in Tbilisi, paid for 50 lari of gas (the price per liter did not seem significantly different from the other filling stations nearby) and drove off. The needle eventually showed that I had gotten about a quarter of a tank of gas.

If I could remember exactly which affiliate I had this experience at, I would be able to avoid it. But it may just be easier to avoid Socar stations entirely. And to share my experience.

Sincerely,

Doug Merrill

Twice as Fast

Four years ago, I was boggled to realize that astronomers had been finding planets around other stars at an average rate of one per month since the first exoplanet around a main-sequence star was discovered in 1995.

On Monday, scientists from the European Southern Observatory (ESO) announced that they had found 32 new exoplanets in recent work, bringing the total found to roughly 400. Spread over the 14 years since that first 1995 discovery, that is no longer one new planet a month; the average is now much closer to one every two weeks.

What is the goal? The astronomers announced their findings at a conference titled “Towards Other Earths: perspectives and limitations in the [Extremely Large Telescope] era.” The ESO instruments have led to the detection of 24 of the 28 known exoplanets with masses of less than 20 times Earth’s. The technology to spot Earth-like planets around other stars is either on the drawing board or under construction. The key puzzles now are how to characterize the atmospheres of exoplanets, and how to deduce other characteristics of the Earth-like planets the astronomers expect to find.

And in two weeks, astronomers will likely have found another planet around a different star.

On being the right shape

Obsessing over strategic geography has a rather… twentieth century feel to it. Few now worry about the control of the Suez Canal, or the rights of warships to traverse the Bosphorus; far-flung scraps of land once valued as coaling stations and choke points are now important chiefly as tax havens and political distractions, and the various growths of Railway Imperialism have largely decayed back into the soil on which they were imposed. But there are a couple of areas that still pursue this approach to life. One, of course, is the subject of pipeline politics, amply discussed by m’colleagues, for example, here. Or here. Or here.

The other doesn’t get quite so much attention.

This Little Piggie Went to Market

The EU Health Commissioner recommends avoiding non-essential travel to Mexico, and the first case of this variant of swine flu in Europe has been reported in Spain. The WHO has already got its Emergency Committee working; they had their first meeting on Saturday. And the Organization’s web site has an admirably complete set of links – background info, audio of the press briefing and conference, and their long-standing guidance on pandemic preparedness and response. There’s good background at the Flu Wiki.

There’s good news and bad news in this older AFOE post that talks about H5N1 and reviews an excellent book on the Spanish influenza of 1918. The short version: the social conditions that contributed to the death toll of 1918 are not present today; monitoring and international cooperation are much, much better. On the other hand, high mortality among younger adults (rather than among infants and the elderly) is a potential common element of the Spanish flu and this year’s swine flu.

Looks like we’re about to find out how much all the awareness-raising and contingency planning done for H5N1 was worth.

Bad Russian Radar

An unexpected consequence of North Korea’s attempted satellite launch is that it demonstrated that Russian early-warning radar coverage is poor. Specifically, the Russians didn’t detect the launch itself at all; they picked up the object during its suborbital flight, but not during its ascent. This is worrying because it suggests two things: first, that the Russians would only get warning of a missile launched from that direction when it was already about to re-enter the atmosphere, giving them very little time to analyse the situation; and second, that the US Ground-Based Midcourse Defense interceptors based on the Pacific coast could, if launched to intercept a North Korean missile, appear on Russian radars flying up over the edge of the Earth, as if they were incoming North Korean, Chinese, or US submarine-launched missiles.

This obviously involves some pretty awful risks, and it is another good reason to be sceptical of GMD; in a real crisis, would it actually be wise to fire it? If not, of course, it’s useless and the potential enemy can be expected to take account of that. Worse, however, is that the Russians are bound to consider a radar contact from that direction more threatening than one from over the Pole, from the West, or from the South, directions in which they have much better coverage. Therefore, the very fact of the weakness is destabilising; it increases the perceived importance of quick reaction, and therefore the coupling between Russian and other missile/radar complexes. With the increasing numbers of ballistic missiles in Asia – Indian as well as Chinese, North Korean, or submarine-based – this is not good news.

It’s been suggested that one solution would be a Joint Data Exchange Centre, a headquarters in Moscow in which US and Russian staff would swap information from their warning systems. This has a serious problem: if one party is willing to launch a first nuclear strike, they are surely also willing to feed fake data to the JDEC and to accept the imminent death of their representatives there, so it is unlikely to be credible. Hence another plan, RAMOS (Russian-American Missile Observation Satellite). This foresaw that the US would finance and help build a constellation of satellites similar to its own Defense Support Program birds, which detect rocket launches worldwide using infrared cameras; the new satellites would broadcast their data in the clear, so that both powers (and anyone else) could receive it and use it independently. Both parties would participate in their development, and would be able to do anything they liked to verify each satellite before launching it on one of their own rockets. (Perhaps now we could publish the design under the GPL.)

This Clinton administration idea, however, failed to get funding back in 1999 and was promptly canned by the Bush administration as far too sane. Perhaps it could be resurrected. Or alternatively, whatever the Americans think, why shouldn’t the European Union do it? The radar position is not as bad in our direction, but the Russians have their own missile-defence interceptors that do fly out our way, and there was that horrible business with the Norwegian research rocket. We have a serious space industry, and the French would be wholly delighted; they consider space power to be a major national priority anyway. It’s better than relying on another Stanislav Petrov.

artificial eye

On the topic of European innovation, this demo application from the Nokia Forum rocks. Basically, it uses the Sensor API in the latest version of Symbian S60 and the phone camera to detect what you’re pointing the cam at, and show information related to it.

Tagging Barcelona

Naturally this information could be sucked in from the Web, which opens up the healthy possibility of not just user-generated, but unofficial user-generated markup for the cityscape with constant feedback. A simple implementation might do something like hashing the geographical position of the feature with its direction and appending that to a selected URL.
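For what it’s worth, a minimal sketch of that idea might look like the following. The base URL, the rounding precision and the helper name are my own assumptions, chosen only to illustrate the hashing step, not anything taken from the demo itself.

```python
import hashlib

def tag_url(lat: float, lon: float, bearing_deg: float,
            base_url: str = "https://example.org/tags/") -> str:
    """Hash a feature's position and viewing direction into a lookup URL.

    Coordinates are rounded (to roughly 10 m and 10 degrees here) so that
    nearby observations of the same feature collapse onto the same tag.
    """
    key = f"{round(lat, 4)}:{round(lon, 4)}:{round(bearing_deg / 10) * 10}"
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]
    return base_url + digest

# E.g. standing in Barcelona and pointing the camera roughly north-east:
print(tag_url(41.4036, 2.1744, 40.0))
```

The point of hashing rather than posting raw coordinates is just to get a stable, opaque key: anyone looking at roughly the same thing from roughly the same angle lands on the same URL, and whatever has been posted there becomes the overlay.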

The real purpose of this is surely the old Surrealist aim, to bring the logic of the visible to the service of the invisible; to put in the horrible details of how that particular bank wants to pass the SKU of the item you just bought back to headquarters with the credit card authorisation request, all for your own good, or how the owners of such-and-such a monster warehouse ordered the staff to moon for the camera because the newspapers wrote bad things about them. (I agree, these examples are prosaic, but then, that’s me.)

Power and Influence

Last week, some US-based bloggers were talking about their dissatisfaction with the term “soft power.”

Matthew Yglesias:

[C]an we retire the term “soft power” already? I always feel that it’s been popularized not so much by Professor Nye as by deranged warmongers who like the idea of terming every alternative to militarism as somehow “soft,” fluffy, and weak. Soft Power is a good book, but it’s a bad coinage for an era in which national security issues have returned as a partisan political topic, and I don’t think it’s an especially great label for what Nye’s talking about.

Here’s a suggestion cribbed from an adaptation of an old tabletop game: power and influence. Roughly speaking, power is the ability to make people do things (or suffer the consequences); influence is the ability to get people to do things on their own (to gain the benefits). NATO has lots of power (and a good bit of influence), while the EU has an enormous amount of influence, but less power. Pointy-haired bosses use their power; good businesspeople use their influence.

Influence is not a second-rate type of power (soft rather than hard); it’s a separate, if related, capacity. So: power and influence.

I wrote to some of the folks whose blogs I cited. Everyone who has replied has been positive about the suggestion. Now to see if they will actually use it, and whether we can change the usage ourselves or whether we need Joe Nye to write an article.