
Police Disabling Their Own Voice Recorders


This is not a surprise:

The Los Angeles Police Commission is investigating how half of the recording antennas in the Southeast Division went missing, seemingly as a way to evade new self-monitoring procedures that the Los Angeles Police Department imposed last year.

The antennas, which are mounted onto individual patrol cars, receive recorded audio captured from an officer’s belt-worn transmitter. The transmitter is designed to capture an officer’s voice and send the recording to the car itself for storage. The voice recorders are part of a video system built around a front-facing camera mounted on the patrol car. Both elements are activated any time the car’s emergency lights and sirens are turned on, but they can also be activated manually.

According to the Los Angeles Times, an LAPD investigation determined that around half of the 80 patrol cars in one South LA division were missing antennas as of last summer, and an additional 10 antennas were unaccounted for.

Surveillance of power is one of the most important ways to ensure that power does not abuse its status. But, of course, power does not like to be watched.

1 public comment
HarlandCorbin
8 days ago
Harsh penalties for those missing antennas. Maybe change over to a cell-based system that streams the voice to secure servers.

Crisis/Short-Term Funding


We’ve all seen it, over and over.  A bridge collapses, usually over a river.  Or here in Iron County, a landslide closes a state highway, and it takes eight months moving a huge chunk of  mountainside to repair the damage and re-open the road. All across the country, we have infrastructure teetering on the edge of collapse, with the potential to kill people and cause millions, if not hundreds of millions, of dollars in damage in each case. But nothing gets done until there’s a crisis, and then what’s done is often only the cheapest acceptable fix.

In the case of the landslide here, the eight-month closure was the third to shut the highway in the twenty years we’ve lived here.  Independent engineers who’ve studied the road suggest that it should have been built on the other side of the canyon, where the rock and ground are more stable.  They even suggested it after the last eight-month closure and repair, which added a detour of more than 100 miles to the commutes, delivery routes, and cargo-haulage trips of local residents, businesses, and tourists.  The state highway department turned that proposal down, claiming it was too expensive.  Yet, if the highway had been built where those engineers suggested, that section wouldn’t have to be rebuilt every five to ten years, and the overall cost to taxpayers would be less, not to mention the possibility of fewer fatalities.

Yet pretty much everywhere in the United States – and likely elsewhere in the world, since I doubt human nature changes much once one crosses borders, with a few possible exceptions – the same sort of deferred-maintenance, “do it cheap now” attitude prevails with regard to the basic structures of society, despite the fact that spending a few more dollars now would save more dollars and lives later.

Why?  Because there seems to be an attitude that keeping taxes as low as possible is prudent.  It’s not.  Keeping taxes as low as possible when calculating costs and expenditures over a twenty or thirty year period is prudent, but keeping them as low as possible every year and deferring every possible maintenance or construction project until something has to be done only results in higher taxes… and higher costs on the community.

There’s an old saying that expresses the point succinctly – “penny wise and pound foolish” – but sayings like that are somehow out of date, which is ironic since it’s usually the Republicans who are looking to cut government spending, even while they keep saying the support traditional values.

 


"Write me something fresh and new, but make it just like the last one"


So, I was making slow but steady headway on "Invisible Sun" (Merchant Princes: The Next Generation #3) when I got bitten this morning by an Attack Novel. I mean, a rabid one. So far, I've confined myself to writing the first 2500 words of an outline; I plan to finish it today, stick it in a drawer to cool (or until the urge to create becomes irresistible), then go back to "Invisible Sun".

This isn't a unique event. You might have noticed Wednesday's wholly inappropriate blog entry about a political satire/thriller that is utterly unsaleable, revolving around the identity of the 2016 Republican Party Candidate for POTUS.

But there's more.

A couple of weeks ago, having publicly said a month earlier that I wisnae gonnae go there, I farted up a wholly new idea for another Near Future Scottish Police Procedural a la "Halting State"/"Rule 34"—only with a very different focus, and so different that I probably can't shoe-horn it into the niche of "The Lambda Functionary" (the planned third book in the trilogy). (It's about the homicide detective with a brain implant that keeps him from thinking he's dead, a viral encephalopathy pandemic that causes Cotard's Delusion, and an enforcer who goes around turning off zombies who've hacked the DRM on their implants. Yes, it's a cognitive zombie detective novel. No, I still can't write it—not until after we're past the Scottish political singularity. But at least I now know what it's about.)

And (I can admit it now) last summer I squirted out an entire unscheduled attack novel, "The Armageddon Score". It's a Laundry Files novel, but narrated by Mo, not Bob, and gives us a very different view of what's going on in the run-up to CASE NIGHTMARE GREEN. (Hopefully it'll come out next July.)

Anyway. I may be slow on the uptake, but I finally figured out what's going on in my head.

In 2008 I published "Saturn's Children". This was followed by "Wireless", a short story collection, and since then, every novel I have sold, written, or published has been part of an existing continuity or series.

It's true. If it's Laundry Files, it's in series. If it's Merchant Princes, it's in series. "Neptune's Brood" is in continuity with "Saturn's Children", and "Rule 34" was in continuity with "Halting State". I'm leaving out "The Rapture of the Nerds" because a collaboration is effectively a different author ...

... But I'm suffering withdrawal symptoms from creating something entirely ab initio.

Partly it's my own fault (Laundry Files novels are now pretty comfortable—I have a Method—and the Merchant Princes are a known quantity too), and partly it's a side-effect of the structure of publishing companies. While it's the job of a senior editor to acquire new books, another part of the job (which the public don't get to see) is that the editor has to sell the idea of the book to their marketing team, who in turn have to go forth and motivate the buyers for the various bookstore chains and wholesalers. It is much easier to sell another book in a series than to sell something wholly new, because it's simply that much easier to explain. You can replace a whole lot of brain-sweat and communication with a simple, "this is the next one in that series you sold last year". And so, whenever my agent and I sit down with an editor to discuss what I can write next year, we instinctively focus on what we sold last year.

But eventually something's got to give. Right now I'm writing the third volume of "Merchant Princes: The Next Generation". It will be followed by the rewrite/submission draft of Laundry Files book 6, and then (almost certainly) by Laundry Files book 7. By the time I've written "The Nightmare Stacks" in, say, early 2015, it will have been seven years since I was last let off the leash to write something wholly original. Nine novels will have passed under the bridge since then, in existing series. And I can feel the pressure to do something new beginning to build up.

PS: In case you were wondering? The outline I'm writing is for a Gothic architectural urban fantasy novel about a slowly dying family of magicians and the effects of the housing bubble on their ancestral home. And I am going to try not to write it before I've finished "Invisible Sun".

Read the whole story
Share this story
Delete

Lockstep in New York Review of Science Fiction

1 Comment

I'm finding that the more a reviewer knows classic space opera (the 20th century version) the more they "get" Lockstep.  Young Adult reviewers have been particularly kind, but now Derek Künsken, writing in The New York Review of Science Fiction, has explicitly compared Lockstep to its predecessors, and to what's often called the "new space opera."  In the article (which you can find here, mind that it's $2.99 to buy the issue) he takes as a challenge my own assertion that with this book I've reinvented space opera, and sets out to see whether I'm right.  To do this he compared the novel to its classic forerunners as well as recent works by Banks, Greenland, McAuley, McDonald, Reynolds and Stross.  He starts by admitting that 

Schroeder has preserved the interesting bits of the space opera setting, the light-year-spanning civilization, without jettisoning respect for known physics. This is an impressive addition to the canon.

His analysis is a fascinating read and a good reminder, to those of us who've lost track over the years, of where this beloved branch of science fiction came from and what it's evolved into.  In doing so, he highlights one of the issues that led me to write the novel: the pessimism of much of the current genre.  There's no sense of innocence in science fiction these days. Now, I'm a firm believer that SF needs to shed its technophilic naivete; the time has passed when we could write starry-eyed tales about how science will cure all our ills.  The hero of my long-running short story cycle, Gennady Malianov, is a pathologically shy Ukrainian arms inspector who, in tale after tale, ends up cleaning up the messes left by exactly that kind of naivete.  So, I'm right there.

However, not only is there space for a mature optimism in SF, I believe it's absolutely essential.  Anyone who has kids has to be an optimist, and we who are to bequeath a transformed world to our descendants are equally obligated, as a society, to work toward a positive future.  That doesn't preclude being grimly aware of the mess we're in and the messes we could still create, as Gennady well knows.  But it means we can still dare, and dream big, and care about the world we're for good or ill bringing into being.  Space opera is a primary myth-form for that civilizational task.

As Künsken puts it,

Schroeder does not undermine, as Letson and Wolfe noted for writers of new space opera, the optimism present in the classic space opera form; quite the opposite. Lockstep is a novel overflowing with the optimism of a simpler time, fully embracing in its tone the adolescent yearning for the adventure, grand gestures, and romance of the classic space opera. Lockstep asserts thematically that it is possible to go back, to recover that innocence of an earlier age.

So, in the end, does he think I've "reinvented" space opera?  Actually, no.  Instead, 

He created conditions under which the charm and wonder of classic space opera could live again. This is an equally valuable feat.

Good enough.  I'm happy now.

1 public comment
pawnstorm
35 days ago
Whether or not Schroeder reinvented space opera, Lockstep was a fantastic read.
Olympia, WA

The Singularity Is Further Than It Appears


Are we headed for a Singularity? Is it imminent?

I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.

I think it's more complex than that, however, and depends in part on one's definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it's not on the immediate horizon.

VERNOR VINGE'S INTELLIGENCE EXPLOSION
My first experience with the term Singularity (outside of math or physics) comes from the classic essay by science fiction author, mathematician, and professor Vernor Vinge, The Coming Technological Singularity.

Vinge, influenced by the earlier work of I. J. Good, wrote this in 1993:


Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
[...]
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
[...]
When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -- on a still-shorter time scale.

That last passage is the key one. Vinge envisions a situation where the first smarter-than-human intelligence can make an even smarter entity in less time than it took to create itself. And that this keeps continuing, at each stage, with each iteration growing shorter, until we're down to AIs that are so hyper-intelligent that they make even smarter versions of themselves in less than a second, or less than a millisecond, or less than a microsecond, or whatever tiny fraction of time you want.

This is the so-called 'hard takeoff' scenario, also called the FOOM model by some in the singularity world. It's the scenario where in a blink of an AI, a 'godlike' intelligence bootstraps into being, either by upgrading itself or by being created by successive generations of ancestor AIs.

It's also, with due respect to Vernor Vinge, of whom I'm a great fan, almost certainly wrong.

It's wrong because most real-world problems don't scale linearly. In the real world, the interesting problems are much much harder than that.

Consider chemistry and biology. For decades we've been working on problems like protein folding, simulating drug behavior inside the body, and computationally creating new materials. Computational chemistry started in the 1950s. Today we have literally trillions of times more computing power available per dollar than was available at that time. But it's still hard. Why? Because the problem is incredibly non-linear. If you want to model atoms and molecules exactly you need to solve the Schrodinger equation, which is so computationally intractable for systems with more than a few electrons that no one bothers.

[Figure: computational complexity of molecular modelling methods]

Instead, you can use an approximate method. This might, of course, give you an answer that's wrong (an important caveat for our AI trying to bootstrap itself), but at least it will run fast. How fast? The very fastest (and also, sadly, the most limited and least accurate) scale as N^2, which is still far worse than linear. By analogy, if designing intelligence is an N^2 problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would be able to design a new AI that is only 70% as intelligent as itself: with quadratic costs, doubling the design capacity buys only sqrt(2), roughly 1.4 times the intelligence, and 1.4 is 70% of 2. That's not escape velocity.
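
A toy simulation makes that arithmetic concrete. To be clear, this is an illustrative sketch with assumed numbers, not a model from any research: suppose building a mind of intelligence N costs N**2 units of design effort, a designer can expend effort proportional to its own intelligence, and the constant is calibrated so a human-level team (intelligence 1.0) just manages to build the first AI at intelligence 2.0.

```python
# Toy model of recursive self-improvement under a quadratic design cost.
# All numbers here are illustrative assumptions, not measurements.

def next_generation(i: float, effort_per_unit: float = 4.0) -> float:
    """Best design the current generation can afford.

    A designer of intelligence i spends effort_per_unit * i units of
    design effort; inverting the N**2 cost curve gives the intelligence
    of the mind that effort can produce.
    """
    budget = effort_per_unit * i
    return budget ** 0.5

def run(generations: int = 12) -> list[float]:
    levels = [2.0]  # the first AI, built by the human team
    for _ in range(generations):
        levels.append(next_generation(levels[-1]))
    return levels

if __name__ == "__main__":
    for gen, level in enumerate(run()):
        print(f"gen {gen:2d}: intelligence {level:.4f}")
```

Each generation still improves on the last, but by a shrinking margin: the sequence climbs toward a fixed point (4.0 with these constants) instead of diverging. With any cost exponent above 1 you get the same qualitative ceiling; a hard takeoff needs design costs that are linear or better.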

We can see this more directly. There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.

Let's focus on a very particular example: the Intel Corporation. Intel is my favorite example because it uses the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard takeoff scenario. It has helped drive Moore's Law, which is impressive enough. But the time period for doublings seems to have remained roughly constant. Again, let's not underestimate how awesome that is. But it's not a sudden transcendence scenario. It's neither a FOOM nor an event horizon.

And, indeed, should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won't immediately be smarter than the entire set of humans and computers that built it, particularly when you consider all the contributors to the hardware it runs on, the advances in photolithography techniques and metallurgy required to get there, and so on. Those efforts have taken tens of thousands of minds, if not hundreds of thousands. The first smarter-than-human AI won't come close to equaling them. And so, the first smarter-than-human mind won't take over the world. But it may find itself with good job offers to join one of those organizations.

DIGITAL MINDS: THE SOFTER SINGULARITY
Recently, the popular conception of what the 'Singularity' means seems to have shifted. Instead of a FOOM or an event horizon, the definitions I saw most commonly discussed a decade ago, now the talk is more focused on the creation of digital minds, period.

Much of this has come from the work of Ray Kurzweil, whose books and talks have done more to publicize the idea of a Singularity than probably anyone else, and who has come at it from a particular slant.

Now, even if digital minds don't have the ready ability to bootstrap themselves or their successors to greater and greater capabilities in shorter and shorter timeframes, eventually leading to a 'blink of the eye' transformation, I think it's fair to say that the arrival of sentient, self-aware, self-motivated, digital intelligences with human level or greater reasoning ability will be a pretty tremendous thing. I wouldn't give it the term Singularity. It's not a divide by zero moment. It's not an event horizon that it's impossible to peer over. It's not a vertical asymptote. But it is a big deal.

I fully believe that it's possible to build such minds. Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains - just matter - do. Mind can be done in other substrates.

But I think it's going to be harder than many project. Let's look at the two general ways to achieve this - by building a mind in software, or by 'uploading' the patterns of our brain networks into computers.

Building Minds
We're living in the golden age of AI right now. Or at least, it's the most golden age so far. But what those AIs look like should tell you a lot about the path AI has taken, and will likely continue to take.

The most successful and profitable AI in the world is almost certainly Google Search. In fact, in Search alone, Google uses a great many AI techniques. Some to rank documents, some to classify spam, some to classify adult content, some to match ads, and so on. In your daily life you interact with other 'AI' technologies (or technologies once considered AI) whenever you use an online map, when you play a video game, or any of a dozen other activities.

None of these is about to become sentient. None of these is built towards sentience. Sentience brings no advantage to the companies who build these software systems. Building it would entail an epic research project - indeed, one of unknown length involving uncapped expenditure for potentially decades - for no obvious outcome. So why would anyone do it?

Perhaps you've seen video of IBM's Watson trouncing Jeopardy champions. Watson isn't sentient. It isn't any closer to sentience than Deep Blue, the chess-playing computer that beat Garry Kasparov. Watson isn't even particularly intelligent. Nor is it built anything like a human brain. It is very very fast with the buzzer, generally able to parse Jeopardy-like clues, and loaded full of obscure facts about the world. Similarly, Google's self-driving car, while utterly amazing, is also no closer to sentience than Deep Blue, or than any online chess game you can log into now.

There are, in fact, three separate issues with designing sentient AIs:

1) No one's really sure how to do it. AI theories have been around for decades, but none of them has led to anything that resembles sentience. My friend Ben Goertzel has a very promising approach, in my opinion, but given the poor track record of past research in this area, I think it's fair to say that until we see his techniques working, we also won't know for sure about them.

2) There's a huge lack of incentive. Would you like a self-driving car that has its own opinions? That might someday decide it doesn't feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that's extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Many of us want the semblance of sentience. There would be lots of demand for an AI secretary who could take complex instructions, execute on them, be a representative to interact with others, and so on. You may think such a system would need to be sentient. But once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognize speech, would need to be sentient. It doesn't need to be. You can have your AI secretary or AI assistant and have it be all artifice. And frankly, we'll likely prefer it that way.

3) There are ethical issues. If we design an AI that truly is sentient, even at slightly less than human intelligence we'll suddenly be faced with very real ethical issues. Can we turn it off? Would that be murder? Can we experiment on it? Does it deserve privacy? What if it starts asking for privacy? Or freedom? Or the right to vote?

What investor or academic institution wants to deal with those issues? And if they do come up, how will they affect research? They'll slow it down, tremendously, that's how.

For all those reasons, I think the future of AI is extremely bright. But not sentient AI that has its own volition. More and smarter search engines. More software and hardware that understands what we want and that performs tasks for us. But not systems that truly think and feel.

Uploading Our Own Minds
The other approach is to forget about designing the mind. Instead, we can simply copy the design which we know works - our own mind, instantiated in our own brain. Then we can 'upload' this design by copying it into an extremely powerful computer and running the system there.

I wrote about this, and the limitations of it, in an essay at the back of my second Nexus novel, Crux. So let me just include a large chunk of that essay here:

The idea of uploading sounds far-fetched, yet real work is happening towards it today. IBM's 'Blue Brain' project has used one of the world's most powerful supercomputers (an IBM Blue Gene/P with 147,456 CPUs) to run a simulation of 1.6 billion neurons and almost 9 trillion synapses, roughly the size of a cat brain. The simulation ran around 600 times slower than real time - that is to say, it took 600 seconds to simulate 1 second of brain activity. Even so, it's quite impressive. A human brain, of course, with its hundred billion neurons and well over a hundred trillion synapses, is far more complex than a cat brain. Yet computers are also speeding up rapidly, roughly by a factor of 100 every 10 years. Do the math, and it appears that a super-computer capable of simulating an entire human brain, and doing so as fast as a human brain, should be on the market by roughly 2035 - 2040. And of course, from that point on, speedups in computing should speed up the simulation of the brain, allowing it to run faster than a biological human's.
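
The arithmetic behind that estimate is easy to check. Here is a back-of-envelope version; the start year and the exact human synapse count are my own assumed inputs, not figures stated in the calculation above:

```python
import math

# Back-of-envelope check of the human-brain-simulation timeline.
# Assumptions (illustrative): the cat-brain run dates to ~2009, and
# "well over a hundred trillion" synapses is taken as 1.5e14.

cat_synapses   = 9e12    # synapses simulated in the Blue Gene/P run
human_synapses = 1.5e14  # assumed human synapse count
slowdown       = 600     # the cat run was 600x slower than real time

# Extra compute needed: scale up to the human synapse count AND close
# the 600x real-time gap.
shortfall = (human_synapses / cat_synapses) * slowdown

# "roughly a factor of 100 every 10 years" -> 100 ** (1/10) per year
growth_per_year = 100 ** (1 / 10)
years_needed = math.log(shortfall) / math.log(growth_per_year)

start_year = 2009  # assumed date of the cat-brain simulation
print(f"need ~{shortfall:,.0f}x more compute")
print(f"~{years_needed:.0f} years of growth -> around {start_year + years_needed:.0f}")
```

With these inputs the crossover lands around 2029 for a bare real-time simulation; adding margin for per-neuron complexity, a higher synapse count, or the lag between a lab machine and one actually on the market pushes the date toward the 2035-2040 range quoted above.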

Now, it's one thing to be able to simulate a brain. It's another to actually have the exact wiring map of an individual's brain to actually simulate. How do we build such a map? Even the best non-invasive brain scanners around - a high-end functional MRI machine, for example - have a minimum resolution of around 10,000 neurons or 10 million synapses. They simply can't see detail beyond this level. And while resolution is improving, it's improving at a glacial pace. There's no indication of being able to non-invasively image a human brain down to the individual synapse level any time in the next century (or even the next few centuries at the current pace of progress in this field).

There are, however, ways to destructively image a brain at that resolution. At Harvard, my friend Kenneth Hayworth created a machine that uses a scanning electron microscope to produce an extremely high resolution map of a brain. When I last saw him, he had a poster on the wall of his lab showing a print-out of one of his brain scans. On that poster, a single neuron was magnified to the point that it was roughly two feet wide, and individual synapses connecting neurons could be clearly seen. Ken's map is sufficiently detailed that we could use it to draw a complete wiring diagram of a specific person's brain.
Unfortunately, doing so is guaranteed to be fatal.

The system Ken showed 'plastinates' a piece of a brain by replacing the blood with a plastic that stiffens the surrounding tissue. He then makes slices of that brain tissue that are 30 nanometers thick, a few thousand times thinner than a human hair. The scanning electron microscope then images these slices as pixels that are 5 nanometers on a side. But of course, what's left afterwards isn't a working brain - it's millions of incredibly thin slices of brain tissue. Ken's newest system, which he's built at the Howard Hughes Medical Institute, goes even farther, using an ion beam to ablate away 5 nanometer thick layers of brain tissue at a time. That produces scans that are of fantastic resolution in all directions, but leaves behind no brain tissue to speak of.

So the only way we see to 'upload' is for the flesh to die. Well, perhaps that is no great concern if, for instance, you're already dying, or if you've just died but technicians have reached your brain in time to prevent the decomposition that would destroy its structure.

In any case, the uploaded brain, now alive as a piece of software, will go on, and will remember being 'you'. And unlike a flesh-and-blood brain it can be backed up, copied, sped up as faster hardware comes along, and so on. Immortality is at hand, and with it, a life of continuous upgrades.
Unless, of course, the simulation isn't quite right.

How detailed does a simulation of a brain need to be in order to give rise to a healthy, functional consciousness? The answer is that we don't really know. We can guess. But at almost any level we guess, we find that there's a bit more detail just below that level that might be important, or not.

For instance, the IBM Blue Brain simulation uses neurons that accumulate inputs from other neurons and which then 'fire', like real neurons, to pass signals on down the line. But those neurons lack many features of actual flesh and blood neurons. They don't have real receptors that neurotransmitter molecules (the serotonin, dopamine, opiates, and so on that I talk about throughout the book) can dock to. Perhaps it's not important for the simulation to be that detailed. But consider: all sorts of drugs, from pain killers, to alcohol, to antidepressants, to recreational drugs work by docking (imperfectly, and differently from the body's own neurotransmitters) to those receptors. Can your simulation take an anti-depressant? Can your simulation become intoxicated from a virtual glass of wine? Does it become more awake from virtual caffeine? If not, does that give one pause?

Or consider another reason to believe that individual neurons are more complex than we believe. The IBM Blue Gene neurons are fairly simple in their mathematical function. They take in inputs and produce outputs. But an amoeba, which is both smaller and less complex than a human neuron, can do far more. Amoebae hunt. Amoebae remember the places they've found food. Amoebae choose which direction to propel themselves with their pseudopods. All of those suggest that amoebae do far more information processing than the simulated neurons used in current research.

If a single celled micro-organism is more complex than our simulations of neurons, that makes me suspect that our simulations aren't yet right.

Or, finally, consider three more discoveries we've made in recent years about how the brain works, none of which are included in current brain simulations.
First, there're glial cells. Glial cells outnumber neurons in the human brain. And traditionally we've thought of them as 'support' cells that just help keep neurons running. But new research has shown that they're also important for cognition. Yet the Blue Gene simulation contains none.

Second, very recent work has shown that, sometimes, neurons that don't have any synapses connecting them can actually communicate. The electrical activity of one neuron can cause a nearby neuron to fire (or not fire) just by affecting an electric field, and without any release of neurotransmitters between them. This too is not included in the Blue Brain model.

Third, and finally, other research has shown that the overall electrical activity of the brain also affects the firing behavior of individual neurons by changing the brain's electrical field. Again, this isn't included in any brain models today.

I'm not trying to knock down the idea of uploading human brains here. I fully believe that uploading is possible. And it's quite possible that every one of the problems I've raised will turn out to be unimportant. We can simulate bridges and cars and buildings quite accurately without simulating every single molecule inside them. The same may be true of the brain.

Even so, we're unlikely to know that for certain until we try. And it's quite likely that early uploads will be missing some key piece or have some other inaccuracy in their simulation that will cause them to behave not-quite-right. Perhaps it'll manifest as a mental deficit, personality disorder, or mental illness. Perhaps it will be too subtle to notice. Or perhaps it will show up in some other way entirely.

But I think I'll let someone else be the first person uploaded, and wait till the bugs are worked out.

In short, I think the near future will be one of tremendous technological advancement. I'm extremely excited about it. But I don't see a Singularity in our future for quite a long time to come.

Ramez Naam is the author of Nexus and Crux. You can follow him at @ramez.


Can Homemade Liquor Jumpstart A Local Economy?



    Agave, or "maguey," planted a decade ago on Edgar's father's land.

    Marianne McCune/NPR

    Back in 2004, the tiny towns across the mountains of Oaxaca, Mexico, had a problem. Everyone was leaving. There were no jobs, and people were flocking to California looking for work.

    No one knew how to stop the mass exodus, but some thought a local liquor, mezcal, might prove to be the answer. Mezcal is made from a plant called maguey, a type of agave. Much like Champagne, there is only one place in the world where you can make mezcal: the mountains of Oaxaca, Mexico, and the surrounding area. Most of it is produced in the valley and sold cheap to big liquor companies.

    But high in the Sierra Norte, the grand plan was to mount an international business exporting a little-known alcohol from a tiny town in the mountains. It seemed nearly impossible.

    One decade later, two cousins named Edgar and Elisandro are making that dream come true.

    It all started with the movie "A Walk in the Clouds," in which Keanu Reeves follows a love interest to her father's winery. Though he knows it sounds corny, Edgar, the older cousin, said the depiction of life on the vineyard inspired him.

    "I thought, I want to do that. I want to live like that." Edgar knew he couldn't grow grapes, but what about agave? He decided he would make mezcal.

    Edgar, too, hoped to bring jobs to his region. But in order to get started he did exactly the thing his scheme aimed to prevent. He migrated to California, where he could make enough money to start growing agave back home.

    Edgar tried to get his brothers and sisters to join his project, but to them, it sounded crazy. Only Elisandro, his 19-year-old cousin, saw its merit.

    "Living in Silicon Valley where companies are popping out back and forth and left and right. It's an inspiration to me," he said, "To start something that's against everything."

    Year after year, the two cousins defied everyone, sending money home for seeds and land. They faced many hurdles, not least of which was that neither had ever started a business and neither had ever made mezcal. When the plants began to ripen, the two divvied up responsibilities. Edgar would go back to Oaxaca and learn to make mezcal while Elisandro would go to college and learn how to launch a business.

    The 'palenque,' the workshop where they make mezcal, is at the bottom of the slopes the agave plants grow on.

    Marianne McCune/NPR

    They bought equipment, filed for permits, and had babies.

    It hasn't been easy. Edgar, for instance, has to walk an hour to his workshop every day. He and his wife and children still live at his parents' little house. Elisandro moved back to the US so he could keep financing the project with his salary as a bartender.

    Edgar Gonzalez tries to fix bugs in the bottling machine before a state inspector comes to oversee their first batch of mezcal for export.

    Marianne McCune/NPR

    After more than ten years with basically nothing to show for all their effort, they finally have a product.

    A few months ago they hired three single moms to work the production line and finally bottled their first batch of Mezcal Tosba for export. Of the 2,000 bottles they sent across the border, Elisandro has already sold three-quarters to fancy bars and restaurants in San Francisco, LA, and Seattle.

    A bottle of Mezcal Tosba.
    Marianne McCune/NPR

    They're not making millions (yet). But they are providing jobs. They're even collaborating with mezcal producers from other villages, in hopes of growing the business across the region. First priority, though: a car for Edgar. And maybe a house.

    Copyright 2014 NPR. To see more, visit http://www.npr.org/.