Technology – Literary Hub

Magnets, How Do They Work? On the Magic of Magnetic Force
https://lithub.com/magnets-how-do-they-work-on-the-magic-of-magnetic-force/

A message had arrived at the telegram office that morning. As the mailman approached the seaside apartment in Mumbai, India, that my grandfather Brij Kishore shared with my grandmother Chandrakanta and their four children, Brij Kishore felt his throat tighten as Chandrakanta pulled on his sleeve and said, “Taar aaya hai.” In Bombay in the 1960s, the arrival of a “taar”—a telegram—usually meant bad news. Few homes had telephones, so far-flung family would send updates about their children, their cooking, and the cricket scores via the well-named snail mail. Only if a matter was truly urgent would they send the news by telegram.

Babuji, as we all called him, tore open the envelope and took out a sheet of pale blue paper. On it was glued a strip of white paper that contained three words: ANXIOUS TO RETURN. He looked at his wife, rolled his eyes, and reassured her that there was nothing to worry about.

After graduating college, Babuji’s son Shekhar had traveled to Italy to look for work. Evidently, he didn’t like it there and wanted to return, but Babuji was determined that Shekhar should give it a shot. So, he put on his chappals and walked down to the post office to send a telegram saying so, using as few words as possible, because telegrams weren’t cheap and were charged by length. Over the next few weeks, many more telegrams arrived from Italy, begging for a ticket back to Bombay. After ignoring many of them, Babuji finally relented. His son, my uncle, returned to Bombay, where he lived out his days.

Magnets—or objects that exert magnetic forces—exist in the very essence of our universe.

Less than sixty years later, each week of pandemic lockdown was punctuated by the demanding squeaks of my toddler: “I want talk Nani right now!” A child of the pandemic, she spent eighteen months of her life unable to see her grandparents in person, so her demands to speak to her grandma were swiftly obeyed. With the swish of a finger on a touchscreen, a call flew through the air to the other side of the planet, and my mom answered. She saw my daughter crawl for the first time, and speak her early words, in color, live, on the screen of a smartphone. When I stop to think about the ease with which we were able to stay in touch through those tough times, I find myself not only in awe of how far we have come but also immensely grateful.

We have been through a radical shift in technology across just three generations of my family, and each step along the way has changed our lives dramatically, just as it has for society as a whole: allowing us to communicate with our loved ones, creating the world of instant news, changing the way we work, and altering the way we entertain and are entertained. But while a video call may seem a far cry from the telegram, all these forms of modern communication are based on the science of signals being sent from one distant point to another, almost instantaneously. And our ability to do that centers on magnets.

I find magnets magical. The magnetic fields that radiate from them are invisible, but they can be substantial, far-­reaching, and influential across large distances. The science is complex and wasn’t understood for thousands of years—­indeed, many physicists will tell you that magnetism, and especially electromagnetism, still isn’t fully understood. But once we had at least some understanding, we were able to create practical mechanisms. Humans harnessed the magic of magnets to create machines that could interact and exert forces on other machines, farther away than had ever been thought possible.

Unlike the inventions we’ve looked at so far, magnets—or objects that exert magnetic forces—exist in the very essence of our universe. You and I are magnets (very, very weak ones—don’t worry, there’s no danger of us suddenly becoming attached to our refrigerators). Atoms, the minuscule building blocks of matter, are magnetic. The planet on which we live is a giant magnet. Magnets, unlike wheels and nails and springs, were discovered rather than invented by humans. They nonetheless deserve their place in this book, because it was humans who figured out how to make them more useful than they were as supplied by Mother Nature. The magnets we found naturally in our surroundings a few thousand years ago were weak and hard to come by. They were formed of magnetite, which came to be known as lodestone, a natural mineral found in the earth that is a mix of iron and oxygen, plus other impurities. It’s a magnetic material, but only a small proportion of the magnetite that exists in nature is magnetic, because it needs both a specific combination of impurities within it and exposure to specific conditions of heat and magnetic fields around it.

The earliest references to this natural magnet date back to ancient Greece in the sixth century BCE. Around two hundred years later, the Chinese documented the phenomenon of a natural stone attracting iron, and in another four hundred years, they began using this material for geomancy (a form of divination). It took another thousand years, advancing into the Middle Ages, before it was used for navigation in the form of a compass. Navigators in the Song Dynasty in China shaped lodestone to look like a fish, and let it float freely in water, so it pointed south. This knowledge spread to Europe and the Middle East soon after. Even then, with over a thousand years of knowing about natural magnets, we couldn’t replicate them, and their use was restricted to navigation.

Magnets themselves come in two distinct forms: permanent magnets and electromagnets. Permanent magnets are the horseshoe-­ and bar-­shaped magnets we saw in school science demonstrations and those that decorate our refrigerators. They have two poles, north and south: bringing together the south poles or the north poles of two magnets creates a pushing or repulsion force, but bring a north and south pole together and the magnets will cling to each other.

It took millennia to come to grips with how magnetism works, because this requires an advanced understanding of atomic physics and materials science. To become a magnet, a material requires many particles, at many different scales, behaving in a very particular way. Let’s start with the electrons that orbit the nucleus of an atom. Just as electrons have a negative electric charge, they also have what physicists call spin, which defines their magnetic characteristics. In some atoms, the spins point in different directions and cancel out the electrons’ magnetic forces entirely, leaving those atoms nonmagnetic. But in others, while some of the electrons are arranged so their spins cancel out, not all are, so there is a net magnetic force left over, creating a magnetic atom.

Then, if we zoom out from the electron scale to the atomic scale, the atoms in an element are naturally arranged at random, which means that the magnetic forces of the individual atoms cancel each other out. In some materials, however, little pockets of atoms—­called domains—­have atoms all arranged in the same direction, giving the domain a net magnetism. However, they are not yet magnets, because the domains themselves are usually arranged at random.

To make a material produce a net magnetism, then, the atoms in the majority of the domains need to be forced into magnetic alignment by a strong external magnetic field, or by large amounts of heat applied at particular temperatures in particular sequences. Once the domains point in the same direction, you have a magnet.
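The arithmetic behind that cancellation is simple enough to sketch in a few lines of code. The toy model below is not from the book; the domain count and the use of 2-D unit vectors are illustrative assumptions. It shows why randomly oriented domains average out to almost nothing while aligned domains add up to a magnet.

```python
# Toy sketch (not from the excerpt): treat each magnetic domain as a
# unit vector in 2-D and compare random versus aligned orientations.
import math
import random

def net_magnetization(angles):
    """Magnitude of the vector sum of domain moments, as a fraction of the maximum."""
    x = sum(math.cos(a) for a in angles)
    y = sum(math.sin(a) for a in angles)
    return math.hypot(x, y) / len(angles)

random.seed(0)
n = 100_000
random_domains = [random.uniform(0, 2 * math.pi) for _ in range(n)]
aligned_domains = [0.0] * n  # as if a strong external field had forced alignment

print(net_magnetization(random_domains))   # close to 0: the moments cancel out
print(net_magnetization(aligned_domains))  # 1.0: every domain pulls the same way
```

The random case lands near zero (on the order of 1 over the square root of the number of domains), which is the unmagnetized lump of iron; forcing every angle to the same value is a crude stand-in for the heat-and-field treatment described above.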

Even today, there is a debate as to how magnetite becomes magnetized in the first place, so artificially replicating this has been a challenge. Certain materials like iron, cobalt, and nickel have electrons favorably arranged to make their atoms magnetic, which in turn sit in well-­defined domains. Our ancestors tinkered with mixes of such metals, heating and cooling them in various combinations to try to figure out the best recipe for forming permanent magnets. They succeeded, to a degree, making somewhat weak magnets that didn’t hold their force for long.

The development of permanent magnets in a scientific way started in the seventeenth century, when Dr. William Gilbert published De Magnete, which outlined his experimentation with magnetic materials. In the eighteenth and nineteenth centuries, we developed more sophisticated methods for making iron and steel, and observed that certain combinations made much stronger or longer-­lasting magnets—­and sometimes even both. But we still didn’t really understand why. The nineteenth century also saw the advent of understanding electromagnetism, which we’ll come back to, but it took until the twentieth century and the conception of quantum physics before we were able to define and understand atoms and electrons well enough to create strong and long-­lasting permanent magnets ourselves.

This led to the use of three types of materials to make permanent magnets: metals, ceramics, and rare-earth minerals. The first major improvement was the development of a metal mix of aluminum-nickel-cobalt, used to make “alnico” magnets, but these were complicated and expensive to make. Then in the 1940s, ceramic magnets were created by pressing together tiny balls of barium or strontium with iron. These were much cheaper, and today account for the vast majority of permanent magnets produced by weight. The third family of materials is the rare-earth magnets, based on elements like samarium, cerium, yttrium, praseodymium, and others.

It is electromagnetism and electromagnetic waves that form the basis of our long-range communication technology.

Within the space of the last century, these three types of permanent magnets have been refined to produce magnetic fields 200 times stronger than before. And this improved efficiency led to permanent magnets playing an important role in much of our modern lives: a car, for example, can have thirty separate applications for magnets, using over 100 individual magnets. Thermostats, door latches, speakers, motors, brakes, generators, body scanners, electric circuitry and components—take any of these apart and you’ll find permanent magnets.

But as we saw, the stories of permanent magnets and electromagnets intertwine, and since the invention of electromagnets around 200 years ago, each has swung in and out of favor as humanity learned more about how they worked and what they could be used for. The prevalence of permanent magnets in the past few decades is due not just to their increasing strength and compactness but also to the fact that, unlike electromagnets, they never need a source of power. But from the nineteenth century onward, and even today in situations where immense fields are needed, electromagnets have dominated. We can control their strength, switching off or cranking up the magnetic field of an electromagnet when it suits.

The reason electromagnets took so long to make an appearance in the field is that we needed an understanding of the science of materials, electricity, and light—and the mysterious force of electromagnetism. It was only when we were able to move electrons in materials that we understood how to create and change this force and apply it to our technology.

Like gravity, electromagnetism is one of the fundamental forces in nature. It is the physical interaction that happens between particles, like electrons, that have an electric charge. In the late eighteenth and early nineteenth centuries, André-­Marie Ampère, Michael Faraday, and other scientists published numerous theories about electric and magnetic fields, which were eventually brought together and summarized by the mathematician James Clerk Maxwell in what are now known as “Maxwell’s equations.” These gave us crucial information that led to the invention of electric motors, and these equations are also the basis of our power grids, radios, telephones, printers, air conditioners, hard drives, and data-­storage devices; they are even used in the creation of powerful microscopes.

The key principle that led to such technological advancement was the realization that moving charges create magnetic fields. Without getting too deep into the complex science, this means that if an electric current is flowing through a coil of wire, it behaves like a magnet. If you change the strength of the current, you change the strength of the magnet. And the converse is also true: applying a variable magnetic field near a wire will create an electric current in the wire. Following on from this science, experiments proved that when a charge, like an electron, moves within a magnetic field (either freely or inside a wire), it feels a pushing force.
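For readers who want the textbook shorthand for the two effects described in this paragraph (standard physics, not something derived in the excerpt), the field inside a long current-carrying coil and the voltage induced by a changing magnetic flux are usually written as:

```latex
% Field inside a long solenoid, and Faraday's law of induction
B = \mu_0 \, n \, I
\qquad\qquad
\mathcal{E} = -\frac{d\Phi_B}{dt}
```

Here n is the number of turns of wire per unit length, I is the current, and Phi_B is the magnetic flux through the circuit. Doubling the current doubles the field, which is the “change the strength of the current, change the strength of the magnet” relationship in the text, and the second equation captures the converse effect, a changing field driving a current.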

Studying the electromagnetic force led us to define the phenomenon of electromagnetic waves. Think of these as waves of force that flow because of the interaction between electric and magnetic fields. Our understanding of light increased manifold when we were able to quantify it as an electromagnetic wave. And, in addition to visible light, we saw that a whole spectrum of electromagnetic waves—­from radio waves (with the longest wavelength) to gamma rays (with the shortest)—­exists, and that these waves can be used in different ways. It is electromagnetism and electromagnetic waves that form the basis of our long-­range communication technology: the technology used by countless people around the world to share news with their loved ones. People like my uncle, the prolific sender of telegrams.

__________________________________

From Nuts and Bolts: Seven Small Inventions That Changed the World in a Big Way by Roma Agrawal. Copyright © 2023. Available from W.W. Norton & Company.

In Praise of the Tangible Sacredness of the Printed Word
https://lithub.com/in-praise-of-the-tangible-sacredness-of-the-printed-word/

An imposing six-foot-by-six-foot steel box in mid-century medical gray with two projectors on either end and a pair of binoculars in the middle, the Hinman Collator looks more like something used by neurologists to diagnose brain tumors than a machine for analyzing Renaissance literary texts. The eponymous invention was fashioned by Charlton Hinman, a former Second World War cryptologist turned Rhodes Scholar who enjoyed tremendous success as a bibliographer and scholar of what’s come to be known as “book history.” Employed at various points by both Johns Hopkins and the University of Kansas, Hinman designed his celebrated “Collator,” which deployed a combination of mirrors and lights to give scholars the ability to compare superficially identical pieces of print. The inventor himself noted in The Papers of the Bibliographical Society of America that his device was “at once awesome and a little ridiculous,” comparing it to a Rube Goldberg machine and admitting that it “presents an especially odd spectacle when in use.” The Collator was designed so that when somebody peered through the binoculars they would be presented with the optical illusion of the two different pages being superimposed upon each other, a trick that would make any variations between those works immediately obvious (as opposed to the laborious work of comparing those texts letter by letter). “Trust not my reading nor my observations,” writes Shakespeare in Much Ado About Nothing, “Which with experimental seal doth warrant/The tenor of my book.” Perhaps, rather, trust the Collator.

If the relic is where matter finds its apotheosis, then it’s hard not to see the folio as a sacred object.

The machine allowed Hinman to compare fifty-five supposedly identical copies of William Shakespeare’s first folio—the 1623 comprehensive printing of thirty-six of the Bard’s plays, with nineteen of them reproduced for the first time, now celebrating its four-hundredth anniversary—and to conclude that there were slight variations across all of these individual books. None of the folios were identical; sometimes a letter or word would be different, evidence of the typesetter at William Jaggard’s London printshop tinkering with errors throughout the process. Or, because every soft tin and lead sort would degrade a bit with each printing due to friction, Hinman was able to ascertain the rough order in which each individual book rolled off the press by charting that degradation to individual printed letters over time. With lenses, light bulbs, mirrors, glass, and metal, Hinman was able to illuminate (literally) the particulars of Shakespearean book production. A 1950 New York Times article notes that “During a six weeks’ period in the summer of 1949, Dr. Hinman worked in the Folger Library and compared 3,000 pages from the First Folio. With his Collator he completed in six weeks a job it would take one man two years of careful reading to accomplish.” Helpful mostly to bibliographers, fifty-nine of the devices were manufactured by Arthur M. Johnson, a former Naval officer based in Silver Spring, Maryland, who sold them to libraries, universities, and according to legend, the CIA, the agency having noted Hinman’s expertise during the war in analyzing aerial bombardment photographs using a similar mechanism.

Nobody in my generation of Renaissance scholars, or the generation who trained me, or even the generation that trained them, has ever worked with a Hinman Collator, and today, a quantitative humanist would undoubtedly use digital technology. Despite having long since become obsolete, both the Hinman Collator and its inventor—the ingenious nuclear age humanist who worked like a scientist—have developed a quasi-mythic reputation. Once while in Scotland, I heard the influential Shakespearean and materialist literary theorist John Drakakis wax rhapsodic about the Collator in a manner that was almost sensual, and I have to say that I get it. Though I have never worked with a Hinman Collator, I have had an opportunity to examine one, most recently at Carnegie Mellon University’s exhibit “Inventing Shakespeare: Text, Technology and the Four Folios,” where I unsteadily approached the metal creature with both deference and awe.
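What “digital technology” means in practice here is essentially a diff: where Hinman superimposed pages optically, a collation script compares two transcriptions character by character. The snippet below is a hypothetical illustration using Python’s standard difflib; the two “readings” are invented stand-ins, not transcriptions of actual folio copies.

```python
# Hypothetical digital collation of two transcribed lines using difflib.
# The variant spellings below are invented for illustration only.
import difflib

copy_a = "We are such stuffe as dreames are made on"
copy_b = "We are such stuff as dreams are made on"

matcher = difflib.SequenceMatcher(None, copy_a, copy_b)
for tag, a0, a1, b0, b1 in matcher.get_opcodes():
    if tag != "equal":  # report only the places where the copies diverge
        print(f"{tag}: {copy_a[a0:a1]!r} -> {copy_b[b0:b1]!r}")
```

Run on real transcriptions, the same handful of lines would flag the kind of press variants Hinman spent years charting by eye.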

Hinman’s research led to the 1966 publication of The Norton Facsimile: The First Folio of Shakespeare, whereby the scholar picked through the material dross to produce a Platonist fiction, a “corrected” work that reproduced an ideal version of the first folio which appears like none of the 233 copies of that book which still survive, or presumably of the 750 that constituted that initial 1623 print-run. More interesting than that edition itself was Hinman’s observation that none of the folios themselves were identical, that their uniformity when compared to manuscripts was superficial: each book is its own material individual. What the Hinman Collator represents to me is an understanding of literature which is estimably physical, that locates the grandeur of the written world not in some abstracted, transcendent, ineffable place, but very much in the material realm, in the body.

Literature is a thing produced, revised, disseminated, and preserved through material means; even the spectral electrons zapping down the circuitry in your smartphone are as material as the first folio’s rag paper and oak-gall ink. The Romantic poet Samuel Taylor Coleridge had a point when he claimed that Shakespeare’s writing emerged from the “unfathomable depths of his own oceanic mind,” but it was also produced by his ink-stained hand, by the contortions and positions of his performers, and the judicious decisions of his printers laboring in a hot and loud shop “at the sign of the Half-Eagle and Key in Barbican,” as F.E. Halliday wrote in his biography of Shakespeare.

My own materialism, fervent though it may be, veers into a type of wooly, incarnational mysticism that I imagine would be anathema to my more sober Marxist friends, but for me the book very much is the thing. I’ve had a few opportunities to actually touch the crinkled, brown paper of a first folio, the fine threads of the rendered rags which compose the individual pages visible and slightly textured to the touch, the individual fraying of faded black letters indicative of the sorts wearing down printing after printing. I’ve been able to turn the page of a first folio to the frontispiece of Macbeth at Lehigh University’s special collections, and to slowly paw through Carnegie Mellon University’s first folio and linger over lines like “We are such stuff as dreams are made on” from The Tempest in its earliest printing. At the risk of sounding sentimental, there are many things that go through one’s mind, not least of which is a sense of reverence for the sterling craftsmanship of what was still a mass-produced object; startling to consider when most of our contemporary books will transform into an acidic pulpy mass before the end of the century. Skill is why the book survived, why people wanted to pass it down, why so many still remain, especially when compared to other books from the time period. I’ve worked in the archive with sixteenth-century books where there is only one remaining copy, far fewer than the first folio’s 233 extant copies. Monetarily, these are worth far less than a folio, and the librarians scarcely paid me any attention, even though I could have suddenly lost my mind and begun ripping pages and eating them. That’s because nobody cares about Thomas Crashaw, but Shakespeare is Shakespeare. Maybe initially the binding and pages and cover, the thread and paper and leather, can explain the endurance of the folio, but it’s fair to say that if we think of a folio as a material object, then it’s certainly a relic, too. By definition, all relics are physical, and if the relic is where matter finds its apotheosis, then it’s hard not to see the folio as a sacred object.

Consider a poetics that acknowledges how embodied literature is, where reading and writing are as much of the body as they are of the spirit. Literature is more material than otherworldly, for in the form of the book—whether hand-written or printed, on a computer screen or even just held within the matrix of neurons and their synapses which is the human brain—we experience the Word become flesh, or at least paper. Which is why on this four-hundredth anniversary of Shakespeare’s first folio, that tremendously important physical book that was as responsible for the invention of the Bard as the words within it, it’s worth meditating on the physicality of his literature. There were other folios of Shakespeare (that name, incidentally, referring to the full size of the paper used, which is simply folded in half, as opposed to quartos, for which the sheet is folded twice into four leaves, octavos folded again into eight, etc.): a second edition in 1632, a third in 1663, and a fourth in 1685. Then the innumerable editions of his complete works over the centuries; today there is The Norton Shakespeare with its bible paper and its mottled jester on the cover, The Riverside Shakespeare with its soft, effeminate portrait of the author on the front, The Arden Shakespeare and the Complete Works prepared for the Royal Shakespeare Company, blessed with the imprimatur of Captain Picard and Magneto themselves. There are innumerable editions of cheap paperbacks and deluxe coffee-table books, open-source academic websites and annotated scholarly works. But the folio was the first, and the first is the one that matters. As Jonathan Dollimore noted in the introduction to his landmark Political Shakespeare: New Essays in Cultural Materialism, “culture does not (cannot) transcend the material forces and relations of production,” and this anniversary of the first folio allows us to consider something wholly more important than a mere man dead for four centuries.

The folio is a record of breath and heartbeat rendered into the idiom of rag and ink.

What must be remembered is how much of what made the folio important—why it had such a high print-run and why so many copies survive—is due to economics. Jaggard’s folio was intended to be a keepsake, an expensive keepsake, and its massive nine hundred pages were unprecedented in theatrical history. Lacking in prestige as a form, at least compared to the elevated modes of epic and lyric poetry, plays had previously only been published in hastily set and cheaply printed quartos, small pocket-sized paperbacks which could make an author or a troupe some money should the theaters be closed due to the plague or Puritans. Slightly under half of Shakespeare’s plays were published during his lifetime in that format, though they are frequently radically different from the “authoritative” versions in the folio (Lear lives, Hamlet says “To be or not to be, aye, there’s the rub”). The 1623 folio indicated that those who mattered believed that the posthumous Shakespeare warranted such a costly volume. Well-set and well-bound, with a handsome engraving of the author by the Flemish artist William Droeshout and a host of valedictory poems by poets like Ben Jonson that functioned like blurbs, the volume struck the investors who funded this initial printing as worth the financial risk. The upshot of this was twofold, for the appearance of the folio also helped to manufacture Shakespeare, in the very literal sense of preserving almost half of his plays that had never before appeared in print as quarto or otherwise, as well as announcing the author himself as a mind worth preserving.

For that reason, Shakespeare’s first folio remains a touchstone of bibliography, a book fabled for both its importance and its price. Monetary value seems a gauche criterion to judge a book by, yet it’s inescapable when it comes to the folio. Only a handful of printed books—as opposed to hand-lettered manuscripts, which can often occupy an entirely different stratification of expense—really compare in price to a complete folio. A first edition of James Audubon’s gorgeously illustrated The Birds of America, published serially between 1827 and 1838, sold at Christie’s for 10.27 million dollars in 2010; The Bay Psalm Book printed in Boston in 1640 went for over 14 million dollars three years later. By comparison, should a Gutenberg Bible come up to auction anytime soon, it’s estimated it could fetch a cool 35 million. Of course, Shakespeare’s is the most expensive work of literature ever sold, even if scripture and ornithology have pocketed more.

As evidence of the enchanted aura which the physical book possesses, consider how libraries so often advertise their possession of a folio, a physical book unnecessary at this point for generating new scholarship. The J.P. Morgan Library owns two, both on display at its Midtown Manhattan location, as does the staid Newberry Library, just outside of the Loop in Chicago. The Victoria & Albert Museum has three behind its rococo façade on Cromwell Road in London’s tony Kensington, while the Huntington in sunny Los Angeles has four, and the red-brick modernist monolith that is the British Library has five. The dreaming spires of Oxford and Cambridge each have four, while the University of London has only one. In the grand patriotic war of Shakespeareana between his native country and the United States, the latter firmly outpaces the former—fifty remain in the United Kingdom, while there are 149 in the United States. The granite lions of Fifth Avenue guard six at the New York Public Library, the Gilded Age Boston Library in puritan Copley Square has one, and the Free Library of Philadelphia has a copy annotated in John Milton’s hand. Even the Buffalo & Erie County Public Library has one. Thirty-one institutions of American higher education are in possession of a copy, including all the usual Ivy League suspects (not Dartmouth though, or Cornell). For sheer chutzpah, nothing compares to the Folger Shakespeare Library in terms of uncorrupted, undistilled, unadulterated Bardolatry, a veritable secular temple to Shakespeare with an astounding 82 copies, the single largest cache in existence, and 32 more than the entirety of Great Britain. All of those folios, purchased with money gained from Standard Oil and named for a distant relation to the coffee fortune, housed in a gorgeous silvery-grey art deco bunker on 2nd and East Capitol in Southeast Washington DC, only two blocks from the Capitol Building whose giant eggshell dome dominates your view after departing from the dark, cloistered Tudor environs of the library within.

But while the first folio may be a symbol, and an icon, and a relic, it is a material object before anything else, and it reminds us that all literature must be written on the body and through the body and with the body. Shakespeare’s plays are a static reminder of what was once physical, of the movement of performance, of blocking and staging and an actor’s comportment. The folio is a record of breath and heartbeat rendered into the idiom of rag and ink. It is not some absolute and perfected soul, but rather each folio is a different body, linked only in their diverse and beautiful imperfections, gesturing towards that imagined spirit of Literature to which we aspire but never reach. Shakespeare’s grave in Stratford infamously reads that “cursed be he that moves my bones,” but he was never really buried there, not really. The actual tomb of the playwright can be found in these first folios, where with material accuracy and anatomical precision, the printers declared that Shakespeare had been “cured and perfect of their limbs.” If you seek his body, here it is.

What To Do When Your Period Comes… In Space
https://lithub.com/what-to-do-when-your-period-comes-in-space/

You may have heard a story about women astronauts of the 1978 NASA class being given an absurd number of tampons by clueless techs. It goes like this: Kathy Sullivan and Sally Ride, both members of the ’78 class, are asked to check a hygiene kit for women in space. Ride begins pulling out a series of tampons fused together in small sealed packages, sort of like links of sausage. And they just keep coming. And coming. Sullivan later recalled that “it was like a bad stage act. There just seemed to be this endless unfurling of Lord only knows how many tampons.” When Ride finally got to the end, the male engineers asked, “Is one hundred the right number?” Sally Ride, with the controlled emotions of a natural astronaut, politely responded, “You can cut that in half with no problem at all.” This is an old tale, but it was widely circulated online in the late 2010s, at one point featured in a popular musical comedy routine by Marcia Belsky titled “Proof That NASA Doesn’t Know Anything About Women.” It’s a great story, and it isn’t wrong exactly, but it may be missing some context that seriously alters its meaning.

Here’s the thing: Dr. Rhea Seddon, the only combination medical doctor, astronaut, and period-haver in the class of ’78, helped make the decision about how many tampons to include. According to a 2010 interview, the large number of tampons was a safety consideration. As she said, “There was concern about it. It was one of those unknowns. A lot of people predicted retrograde flow of menstrual blood, and it would get out in your abdomen, get peritonitis, and horrible things would happen.”

According to Seddon, the women were skeptical of the concerns, and their preference was not to treat it as a problem unless it became a problem. But she was involved with the final decision made with the flight surgeons, and according to her:

We had to do worst case. Tampons or pads, how many would you use if you had a heavy flow, five days or seven days of flow. Because we didn’t know how it would be different up there. What’s the max that you could use?

Most of the women said, “I would never, ever use that many.”

“Yes, but somebody else might. You sure don’t want to be worried about do I have enough.”

In other words, the story may have been less about idiot male techs and more about the NASA approach of solving all problems with more equipment. As Seddon remembers it, they decided to take the maximum amount they imagined a woman with a heavy period could need, multiplied that by two, and then added 50 percent more.

This would be typical NASA behavior—if you read the 1,300-page Human Integration Design Handbook, which we unfortunately have, you will encounter the word “maximum” 257 times, as on page 604, which contains a remarkably detailed treatment of Number 1, including what you might call a peequation,

V_U = 3 + 2t,

where V_U is the maximum total urine output in liters per crewmember, and t is the number of days of the mission.
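To make the formula concrete (our arithmetic, not the handbook’s), a five-day mission works out to

```latex
V_U = 3 + 2t = 3 + 2(5) = 13 \ \text{liters per crewmember}
```

that is, thirteen liters of urine capacity budgeted for each astronaut over the mission.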

The story may have been less about idiot male techs and more about the NASA approach of solving all problems with more equipment.

In the case of tampons, the excessive concern may have been appropriate. Lynn Sherr, longtime journalist, friend to a number of female astronauts, and also Sally Ride’s biographer, said the first woman who ever menstruated in space had problems with “leakage.” Remember, space is awful. There is no gravity to pull fluids in a generally downward direction. Blood, through a process called capillary action, tends to climb out. 

According to Sherr, that anonymous astronaut elected to wear a tampon as well as a pad.

Women astronauts today mostly favor hormonal birth control. The pills may have to be reworked a bit for a long trip to deep space, since most Earth women don’t require birth control that is shelf stable for three years in the presence of space radiation. On a first Mars trip, where the major focus is survival, pregnancy would be a disaster. On any attempt at permanent settlement, pregnancy will be one of the goals.

We apologize for slightly ruining the tampon story, but look—if you want to hear about NASA engineers not understanding female anatomy, better options are available. Take a look at the urination devices they originally proposed for women, of which Seddon once said, “Borrowed from chastity belt designs for sure!”

In what you might call a literal example of structural sexism, the engineers were trying to duplicate the condom-shaped system used by male astronauts. As Amy Foster wrote in Integrating Women into the Astronaut Corps, “it seems that none of the male engineers assigned to this project felt comfortable enough to consult a woman first.” The female-anatomy edition never flew, and ultimately women wore a version of what we now call a MAG: maximum absorbency garment. Basically, adult diapers. MAGs are now the standard clothing for situations like launch and landing, where astronauts can’t just get up to use the potty.

This is true for men as well, and it’s a blessing. In order to use the old system, men were required to specify whether they needed a small, medium, or large. The choice between being honest with the medical attendant and potentially wetting yourself while strapped in for launch was apparently Scylla and Charybdis for some. According to Michael Collins, among Apollo-era astronauts, male egos were spared by referring to small, medium, and large as “extra large, immense, and unbelievable.”

This is perhaps not the best “equality helps everyone” story, but it may be the weirdest.

__________________________________

From A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? by Kelly and Zach Weinersmith. Copyright © 2023. To be published by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC.

Magical and Essential: On the Mineral and Metallic Bases of Our World
https://lithub.com/magical-and-essential-on-the-mineral-and-metallic-bases-of-our-world/

Featured image: Francesco Mocellin via Wikimedia Commons

Nestling alongside strange lunar plains, bright red lakes, mountains of salt and fuming volcanoes, the Salar de Atacama is a place of rare, discomforting beauty. On its fringes are flamingos, guanacos and vicuñas, the wild cousins of the llama and alpaca. But as you journey inwards, towards what scientists call its nucleus, life begins to disappear. Here, within a barren desert is an even more barren expanse.

This is the driest place on earth, save for some benighted parts of Antarctica. You notice it the minute you arrive: you feel it on your skin, in your throat and on your cracked lips. In my time there the humidity did not exceed 12 per cent, which is perfect for a Finnish sauna but less ideal for biological life.

The technical explanation is that this desert of sand, stone and salt sits in what is termed a two-sided rain shadow. To its east are the Andes; to its west is the Chilean coastal range of mountains. The upshot is that rain is very, very rare. There are some weather stations that have never recorded a single drop, though in some parts, including the Salar, there are very occasional torrential showers. Locals call them the Bolivian winter, though paradoxically they usually fall in the summer.

The Salar itself is a salt lake, though it doesn’t look much like the kind of salt lakes you might have in your mind’s eye: those massive, flat expanses of white like the Great Salt Lake of Utah or the Salar de Uyuni in Bolivia. The Salar de Atacama is, by contrast, brown and slightly scaly. The brown color is in fact a very thin coating of sand, which has blown out from the neighboring desert on to the surface here and clings to the salt. The scaliness is a function of the fact that the salty surface is still slowly, imperceptibly growing, with new stalks of salt reaching like fingers towards the sky. Other salt lakes are white and flat because the rain tends to wash away the sand and dissolve the scaly claws before they can form. But since it doesn’t really rain here, the fingers and crust slowly carry on growing.

No other element has quite the same combination of lightness, conductivity and electrochemical power.

At one point I strode out on to this crispy surface and soon realized I’d made a mistake. You are supposed to wear tough gloves if you go near the unbroken salt, since those fingers and edges are sharper than a chef’s knife. The surface is so uneven and unpredictable that it is hard to pick your way through without occasionally stumbling, and if you stumble and use your hands to break your fall…after five minutes of unsteadily picking my way through the thicket of salt I paused, looked down at my fingers, visualized what they’d look like if I fell and turned back. With each new stride the salt crackled and snapped at my feet. Strange echoes twanged through the surface like the thawing ice on a Nordic lake. These sounds were all the more spooky because of what I knew lay a few meters beneath the surface: an utterly gigantic underground reservoir of concentrated saltwater. This saltwater, this brine, is the business; it’s what we are here for. It is a deep, rich solution of many different salts of sodium, magnesium, potassium, boron and, yes, lithium.

There is a certain empirical logic that secures lithium’s place as one of the six key members of the Material World. This is a magical metal: alongside hydrogen and helium it was one of the three primordial elements created in the Big Bang, making it one of the oldest pieces of matter in the universe. No other element has quite the same combination of lightness, conductivity and electrochemical power. No other metal is quite as good at storing energy. So light it floats in oil, so soft you could cut it with a kitchen knife but so reactive that it fizzes and bangs when it makes contact with water and air, it is one of those materials you don’t ever see in its elemental form outside of a chemistry lab. And this reactivity helps explain why lithium is at the heart of the most powerful batteries, and therefore the heart of the twenty-first-century world.

If we are to eliminate carbon emissions and phase out fossil fuels in the coming decades we will have to electrify much of the world (less oil but more copper). We will need to build many more wind turbines (steel, silica and copper) and solar panels (copper and metallurgical silicon), not to mention hydroelectric dams (concrete). But none of this will do the trick unless we have a way of storing that energy. We will need to store it for short periods to deal with the inherent intermittency of renewable sources of energy, such as the sun and the wind. And we will need to store it so that road vehicles can get from A to B without burning fossil fuels.

While batteries do not provide all of the answers, they are a large part of that missing link that might just get us there. And while there are many other chemicals inside batteries—of which, more later—there is no beating lithium when it comes to its lightness and its ability to store energy. As science writer Seth Fletcher puts it, “The universe hasn’t given us anything better.”

The reason we are back in Chile once again is that in much the same way as there is nowhere else on the planet with quite so much copper, there is also nowhere else on the planet where we can lay our hands on quite so much lithium. The Salar de Atacama is the single biggest source of lithium anywhere.

Quite how it came to be here is one of those mysteries we are only beginning to fathom, but as things currently stand, the most compelling explanation is as follows. Think of the Salar as a kind of cauldron, with the Andean volcanoes on one side and another smaller set of hills on the other. Water comes down from the Andes in a number of different rivers, which run down through deep gorges—quebradas—towards the basin. Along the way this water picks up microscopic amounts of those unusual minerals in the Chilean earth, but when it hits the bottom of the valley it has nowhere to go. Trapped inside the cauldron, the water percolates into the gravelly ground, where—this being one of the driest regions on earth—much of it evaporates.

Only when you imagine this process happening over millions of years of deep time—the river water with its tiny quantities of leached volcanic minerals coursing into the basin, soaking through the alluvial fan and then evaporating under the punishing South American sun—do you start to comprehend how this enormous salt lake came to be. Millennium after millennium the water evaporated, leaving that concentrated liquid cocktail of salts. Millennium after millennium the sodium chloride, which precipitates faster than the other ingredients in this brine, formed into a crust on the surface: that crust I briefly stumbled over. In the oldest sections furthest from the rivers, the salt folded into an entire mountain range, the Cordillera de la Sal, as the tectonic plates beneath this landscape continued to crunch together. The process is probably still happening today, but too slowly to be in any way discernible.

And there you have it. The Salar is undoubtedly striking to behold, but what is more mind-boggling is what lies beneath it. In parts, the salt is at least 3 miles thick. In parts there is only a thin skin and beneath it vast quantities of ancient brine, which has sat beneath the surface, soaking underground like a sponge, for at least 3 million years. We tend to think of water as being constantly in motion, whether in the sea or our rivers or in the droplets that evaporate from lakes into clouds which rain down and complete the cycle. But the water here has been imprisoned in this dark, salty dungeon—still and inert—since well before the dawn of humanity.

That the batteries in mobile phones, laptops and electric cars are made in part from this ancient liquid is another one of those paradoxes you are already familiar with from the Material World: the very old giving birth to the very new. But there is nonetheless something dizzying about it. As you watch the briny water gushing out of the pipes here it is hard to get your head around the fact that this is the first time it has seen the light for millions of years. Or that it will soon be entombed again inside a battery in a contraption on the other side of the world.

Two companies mine the lithium in the brine. There is Albemarle, which started out as a paper manufacturer and chemicals firm before doubling down on lithium as well, and SQM, that same chemicals company we encountered back in the salt section, which mines caliche and turns it into fertilizers elsewhere in the Atacama.

There is no beating lithium when it comes to its lightness and its ability to store energy.

The way this type of lithium mining works is relatively simple. The ancient brine is pumped out from under the salt crust, from brine wells located all over the Salar. It is channelled into gigantic ponds where the water is evaporated away. It is a slow process taking many months: first the sodium chloride precipitates, then the remaining brine is channelled into another big pond where the potassium salts precipitate, then into another evaporation pond where the magnesium salts are removed. Eventually, after well over a year, that brine that left the underground reservoir as a pale blue liquid has been concentrated into a yellow-green solution, almost as bright as a neon highlighter. At this stage, it is about 25 per cent lithium chloride, though the green color actually comes from the boron still left in the solution.

You might have noticed that not only is this process rather straightforward, it’s also precisely the same technique the Phoenicians used when making salt in Ibiza thousands of years ago and artisan producers still use today to turn seawater into fleur de sel. Only here, alongside the sodium chloride is lithium salt: lithium chloride. Really, the main difference is its scale: the evaporation ponds turning out Mediterranean salt are measured in meters whereas the ones here in the Salar are measured in kilometers.

SQM, which these days extracts most of the lithium, became one of the world’s biggest lithium producers almost by accident. It originally started pumping out brine from the Salar in the 1990s not so much to produce lithium as to produce potassium: potash. The lithium was an interesting by-product. Indeed, up until recently, no one paid all that much attention to this element, which unlike the other materials in this book played little more than a passing role in civilization.

Perhaps its most important use was as a pharmaceutical: lithium became such a popular treatment for bipolar disorder and depression that it entered the cultural lexicon, featuring in songs by Evanescence and Nirvana. Indeed, it is so effective at subtly altering one’s mood that some have argued it should be added to drinking water in much the same way as many countries add fluoride to help dental health. It has a small but all-important role in new nuclear power technologies. Lithium turns out to be an essential coolant for molten salt reactors and is the main way of breeding the tritium we will need if we ever get round to achieving mainstream nuclear fusion. There are a few other uses: it can help strengthen glass (lithium being one of the first elements Otto Schott added to his melts in the nineteenth century). It plays a role as an alloy in certain metals and its slipperiness means lithium compounds make for excellent lubricants, as well as improving the look and wear of ceramics.

All of which makes lithium something of an outlier in the Material World. All of the other materials we have encountered thus far have been essential parts of our lives for generations if not centuries. But had this book been written a few decades ago lithium would likely not have made the shortlist at all. Its place as one of the essential substances in our lives is the fruit of a long-standing challenge, a little like the rediscovery of the recipe for cement or the invention of the solid state semiconductor. The quest to create a strong, powerful, resilient battery was a century in the making.

__________________________________

Excerpted from Material World: The Six Raw Materials That Shape Modern Civilization by Ed Conway. Copyright © 2023. Available from Alfred A. Knopf, an imprint of Knopf Doubleday Publishing Group, a division of Penguin Random House, LLC.

Capturing Process and Industry in America: On the Photography of Christopher Payne
https://lithub.com/capturing-process-and-industry-in-america-on-the-photography-of-christopher-payne/

Christopher Payne was en route through Brooklyn on his way to the MTA Overhaul Shop in Coney Island, where they rebuild and maintain subway cars. As he passed storefronts, bodegas, and restaurants, he commented, “‘STEAKS, CHOPS, SEAFOOD’—you don’t see that on the signs for diners anymore.” Payne is renowned for his photographs documenting industry in America. When he creates images of things being produced, he feels the urgency of knowing that all manufacturing processes change and disappear over time. He conveys the power and beauty of making things. All sorts of things: Steinway pianos, Whirlpool washing machines, Kohler urinals, Airbus planes, and electric vehicles shuttling down the assembly lines at Ford and Rivian. His focus ranges from traditional processes serving niche markets to ultramodern technologies.

Payne had photographed in the MTA Overhaul Shop several times already. In the cavernous skylighted space, he had the swagger of someone who understands the work done there, which won the respect of the workers. They knew from previous shoots the exactitude and precision—the eccentricity—he exhibits when composing a photograph. In his steel-toed boots and hard hat, Payne stalked the aisles lined with trains like a museum curator searching for treasures to put on display. Today his mind was set on a forty-ton subway car. He wanted to document the moment when the train is hoisted into the air to facilitate work on its undercarriage. Payne envisioned a moment when the elevated car would align with the car behind it in a way that would be deeply satisfying. This moment of geometric and compositional sublimity had eluded him so far. He is a perfectionist.

There is nothing loose or improvisatory about Payne’s work. As we entered the shop that morning, he said, “We’re going to get medical with this—like, surgical.” He will return to the same location five or even ten times in pursuit of an image that is escaping him or to redo an image he thinks he can do better. That’s what he was up to this day in Brooklyn. He set up his tripod and, as he was shooting, he directed the men moving the car into position to lift it a few inches higher here or drop it a few inches there. They endured several rounds of his requests because, as much as he admires the tremendous skill they bring to their labors, they seemed to admire the obsessive, sometimes baffling perfectionism he brings to his art. At one point, as he kept honing the exact composition he wanted, he said, “I don’t know if I am chasing something that is unattainable.”

Red/blue editing pencils before dunking in blue paint. General Pencil Company, Jersey City, New Jersey

It was thrilling to see the colossal subway car handled like a toy. Scale plays a major role in Payne’s work. Pencils in a factory in Jersey City look monumental, and a row of airplane fuselages on an assembly line in Wichita, Kansas, looks tiny. He shoots behemoths like nuclear submarines, wind turbines, and printing presses with the same flair and eye for detail he brings to shooting tiny fiber optics and computer innards. The steel-and-copper hatch of a nuclear submarine could be, at first glance, a watch component. One of his most delightful photos shows a man inside a huge New York Times printing press, engulfed by the tangle of wires, cables, and gears he is cleaning. Payne loves seeing humans inside machines.

Circular forms appear regularly in Payne’s pictures. A worker’s tiny legs peek out from below the huge steel sunflower of a jet engine. Rows of massive wheels are lined up in a locomotive factory in Fort Worth, Texas. Hundreds of spools of wire are mounted on the spokes of a gigantic orange wheel in the Nexans high-voltage subsea cable plant in Goose Creek, South Carolina. Chartreuse golf balls whirling in a vibrating buffing chamber at the Titleist factory become graceful minimalist sculptures. One imagines him walking onto a factory floor filled with machinery and feeling the same jolt of inspiration that Monet once felt gazing at water lilies and van Gogh felt in a field of haystacks. The manufacturing world is Payne’s muse.

In 2010, a yarn mill in Maine caught Payne’s interest. The once-booming textile industry in the United States had shrunk dramatically in recent decades, and one of the main obsessions that fuels Payne’s art is the desire to capture traditional manufacturing processes before they disappear. The mill became the first of more than twenty that Payne documented throughout New England. One morning he received a call from the owner of the S & D Spinning Mill in Millbury, Massachusetts, a place where Payne had spent some time. The owner said, “You might want to come up today. We’re running pink.”

Wool carders. S & D Spinning Mill, Millbury, Massachusetts

Prior to that, whenever Payne had been on-site, they were running black, white, and gray wool. Payne, who lives in upper Manhattan, still chuckles when he recounts thinking to himself, “Do I want to give up my parking space right outside my apartment to drive three and a half hours to the mill?” Of course he did. He made one of his iconic photos that day—a deliriously pink sea of unspun fuzzy wool fiber stretched across a bank of gray rollers cascading down from the ceiling. The interlocking lines and angles formed by the grid of rollers, ladders, fencing, and vividly magenta gossamer fibers form a rhythmically harmonious composition that would hold its own against a rigorous Mondrian-esque abstraction if it weren’t for the unruly wool puffs wafting about on the floor and webbing down from the rafters. This fiber would eventually be used for hardware-store paint rollers. Payne is always ready to drop everything to go to a factory in pursuit of a color or moment in the industrial process that he has been chasing.

Even when the product being manufactured isn’t colorful, hints of cobalt blue, sunny yellow, and fire-engine red pop up in Payne’s photographs, thanks to factories using these primary hues as warnings and decorative accents. He waited months to get the spaghetti strands of blue pastel at the General Pencil factory in Jersey City. Gloved hands gently hold the soft material atop a stack of wooden boards cut with ridges to shape the strands. The scene is rendered with Payne’s classically cinematic Rembrandt lighting evenly illuminating the hands while letting the background fall into darkness. There is an air of timelessness to the image. Payne says it is hypnotizing to watch someone do a repetitive motion. When he was in one of the textile mills, he spent the better part of a day making a portrait of a man doffing a large spool of wool roving (wool fiber that has been processed but not yet spun) because he wanted to catch the moment of peak elegance.

This is usually the aim when Payne is photographing workers. He will labor over a portrait with the same fierce attention to minute shifts in position and lighting that he brings to his still-life images, trusting that he will have a chance to remake a picture due to the repetitive nature of assembly-line and factory work. The task will be repeated. He wants to illuminate and celebrate the skills of the workers and to honor their craftsmanship. There is no excuse for not getting it right. A tour through the Steinway piano factory in 2002 started Payne on his mission to document industry in America. He was overwhelmed by the beauty and delicacy of the artisans’ work and found himself thinking about it for the next decade. He eventually gained privileged access to the factory and began what would become a three-year project to show how pianos are made. He found it to be a “very meditative place,” and says, “When I saw them bending the wood for the piano around the rim press, I said, ‘Oh my God, that is the first step in the creation of a concert grand that will eventually end up in performance halls around the world,’ and I almost cried. This is when the wood is transformed into the unmistakable silhouette of the piano. Before that, it is just planks.”

PEEPS Marshmallow Chicks cooling on a conveyor belt before packaging. Just Born Quality Confections, Bethlehem, Pennsylvania

The smell of wood was everywhere. Much of the work is done by eye with chisel in hand. A “belly man” literally lies on top of the sound boards on a table cut out in the shape of a piano as he does his work. Payne’s grandmother and mother taught piano, and his father was a classical musician. He feels this has influenced his photographic work. He originally trained as an architect and worked as one for twelve years. When the recession hit in 2008, he found himself at a crossroads, realizing that he preferred being in actual physical spaces to drawing the plans for future buildings. He turned to photography full-time, crediting his years of translating three-dimensional spaces into two-dimensional drawings with giving him a deeper understanding of form and function.

The biggest challenge Payne faces is an unusual one for an artist. He is obsessed with process.

I first met Payne when Bonni Benrubi, his gallerist at the time, showed me his stunning photographs from the Steinway factory in the spring of 2012. We published those images in the New York Times Magazine, where I have been the director of photography since 1987. Since then, I have enjoyed working with Payne on numerous projects. We commission him because of his singular ability to make gloriously monumental photos that illuminate what he refers to as the “grandeur and sublimity” of industrial processes.

Three of the most memorable photo essays we’ve published—the textile mills, the pencil factory, and even the New York Times printing plant—were self-assigned art projects that Payne either brought to us after they were complete or asked for our help in gaining access to a facility; he had no promise of publication upon their completion. Payne, who sold newspapers in Boston when he was a teenager, desperately wanted to shoot inside the massive Times printing plant in College Point, Queens. After we granted him access, he visited the plant more than thirty times, often into the wee hours of the morning, to get the best images of the presses running and the press operators at work. Sometimes he came away empty-handed if things didn’t align visually in the way he hoped they would. This deep engagement with his personal projects gives him the granular knowledge of the manufacturing process he needs to make the formally beautiful and informationally meaningful images he seeks.

Warp yarns feeding a Jacquard loom for the weaving of velvet upholstery. MTL, Jessup, Pennsylvania

The biggest challenge Payne faces is an unusual one for an artist. He is obsessed with process. When he is photographing inside a factory, there is a constant inner tug-of-war between his desire to make the most beautiful photo possible and his desire to show how something works. He says, “I struggle with the burden to show process. To convey useful information as well as beauty. It can’t just be beauty. It has to have meaning.” It is a self-imposed burden. We published the photo essay of the Times printing plant as a special section of the broadsheet. A selection of the photographs he made now hangs in the Times building in Times Square.

Payne cites as influences the work of Andreas Feininger, the photographer who covered industry for Life magazine in the 1940s and 1950s; Alfred Palmer’s factory portraits during World War II for the Farm Security Administration; the industrial photographs of Ezra Stoller (who was known primarily for his architectural commissions); and the pictures Joseph Elliott made at the Bethlehem Steel plant in the 1990s. Payne has grabbed the baton and run with it. He shares the appreciation of sculptural forms evident in Bernd and Hilla Becher’s seminal documentation of disappearing industrial architecture in Germany, of objects such as cooling towers, gas tanks, and grain elevators. The big difference between their photography and Payne’s is that they clearly had a formal agenda and Payne’s is both formal and humanistic. Payne also looks to Vermeer’s paintings for his portraiture because, he says, “I love the soft side light and the way his pictures are architecturally composed and ordered, with everything in its place for a reason.” Payne’s work will one day resonate in the way Berenice Abbott’s Changing New York photos from the 1930s do today. They will serve as historic records.

To succeed, photographers need to be opinionated. Payne’s photographs declare with clarity and passion his belief that American manufacturing is to be treasured and valued and the workers respected and honored with our attention. The hard labor of these workers has been documented by one of the finest documentary artists of our time. This book should be like the topping-out ceremony held when the highest feature of a tall building is set in place to celebrate the end of construction. After all the work Payne has done in magnificently rendering the toil of the workers and the beauty of industrial processes, he should be able to step back to survey the breadth of his achievement, but as I write this essay, I know he is still trying to gain access to places he hasn’t been able to get into yet—a jet engine test site, a high-tech pharmaceutical lab, and a space capsule he has been dreaming about. There is always something more to photograph.

__________________________________

Made in America by Christopher Payne

Excerpted from Made in America: The Industrial Photography of Christopher Payne. Foreword by Kathy Ryan. Copyright © 2023 Abrams Books. Used by arrangement with the Publisher. All rights reserved.

“AI, Ain’t I A Woman?” On the Blindness and Limitations of Artificial Intelligence https://lithub.com/ai-aint-i-a-woman-on-the-blindness-and-limitations-of-artificial-intelligence/ https://lithub.com/ai-aint-i-a-woman-on-the-blindness-and-limitations-of-artificial-intelligence/#respond Wed, 01 Nov 2023 08:50:50 +0000 https://lithub.com/?p=228562

I sensed an opening. Research papers could reach academics and industry practitioners focused on AI, but I needed something more to reach everyday people. I also needed to reach decision-makers like elected officials who might be seduced by the promises of AI to bring increased efficiency without being aware of racial, gender, and other types of bias. Did the government officials in India exploring the adoption of the Aadhaar system know about the potential for bias in the biometric solutions being offered as answers for efficient distribution of government resources and persistent identification? Did they know algorithmic bias might deny benefits to the very people they sought to help? What about the police departments adopting facial recognition technologies? What did they know about algorithmic bias, if anything? I knew I couldn’t leave it to the companies selling these systems to reveal their flaws. There was no incentive to put technological shortcomings in a sales pitch. I needed to humanize the harms and biases of AI systems and bring a perspective that tech companies were likely to shy away from. How might I use my knowledge to help people see beyond the headlines now being written about my work, “Facial Recognition Is Accurate, If You’re a White Guy,” and feel the impact on a specific person?

I decided one way to humanize AI biases and make the topic more mainstream than an academic paper was to test the faces of the Black Panther cast. Since my research had shown that the systems I tested worked worst on the faces of darker-skinned females, I decided to focus on the faces of the women of Wakanda: Lupita Nyong’o as Nakia, Letitia Wright as Shuri, Angela Bassett as Queen Ramonda, and Danai Gurira as fearless General Okoye. I brought on Deborah Raji as my research intern to carry out a small-scale audit running the Black Panther cast’s faces across the AI systems of five companies. This exploration became known as the Black Panther Face Scorecard project. The project revealed some commonalities with my own experience. Like me, some of their faces were misgendered, not detected at all, or in some cases mis-aged. Angela Bassett, who was in her late fifties at the time of the photo, was estimated by IBM’s system to be between eighteen and twenty-four years old. (Maybe not all algorithmic bias was that bad.)
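The audit described here boils down to a simple loop: submit the same set of images to several commercial systems and log the detection, gender, and age labels each one returns. The sketch below only illustrates that shape; the file names, provider list, and classify_face helper are hypothetical placeholders, not the actual services or code used in the Black Panther Face Scorecard.

```python
# Illustrative sketch of a small-scale face-classification audit.
# The file names, provider list, and classify_face() are hypothetical
# placeholders; the real project submitted each image to five commercial
# AI services and logged the labels they returned.
import csv

IMAGES = {
    "nakia.jpg": "Lupita Nyong'o",
    "shuri.jpg": "Letitia Wright",
    "ramonda.jpg": "Angela Bassett",
    "okoye.jpg": "Danai Gurira",
}
PROVIDERS = ["provider_a", "provider_b", "provider_c", "provider_d", "provider_e"]


def classify_face(provider: str, image_path: str) -> dict:
    """Hypothetical stand-in for one vendor's face-analysis call.

    A real audit would upload the image to the provider's API here and
    return its labels, e.g. {"face_detected": True, "gender": "female",
    "age_range": "18-24"}.
    """
    return {"face_detected": None, "gender": None, "age_range": None}


with open("scorecard.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["actor", "provider", "face_detected", "gender", "age_range"])
    for image_path, actor in IMAGES.items():
        for provider in PROVIDERS:
            result = classify_face(provider, image_path)
            writer.writerow([
                actor,
                provider,
                result["face_detected"],   # "no face detected" failures land here
                result["gender"],          # misgendering shows up in this column
                result["age_range"],       # mis-aging shows up in this column
            ])
```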

My collection of failure demonstrations provided a counterpoint to the celebrations that accompanied technological advances.

The results were amusing. The Black Panther Face Scorecard drew smiles from colleagues and visitors from member companies of the MIT Media Lab. These fictional characters, played by actors whose faces had reached billions of people, still felt safely removed from everyday life. While more women were rocking shaved heads, not many people were walking around with vibranium undershirts or bracelets with ammunition to keep superhero relatives safe. At least, this wasn’t happening in my social circles.

The performance metrics on the women of Wakanda kindled my curiosity. How would these AI systems work on the faces of not just fictional dark-skinned women but iconic women of today and yesterday? How might AI read the faces of highly photographed women like Michelle Obama, Serena Williams, and Oprah Winfrey?

Screenshot of Oprah Winfrey image misclassification, from the visual poem “AI, Ain’t I A Woman?” Youtu.be/QxuyfWoVV98?t=133.

And how would it do on historic figures like Sojourner Truth, who escaped slavery by buying her freedom and pushed for women’s rights and the abolition of slavery? I was also eager to try the faces of Shirley Chisholm, the first Black congresswoman, and fearless journalist Ida B. Wells. I searched online for popular, widely used images of these women, which Deborah Raji ran through systems that included IBM, Amazon, and Microsoft. When she shared the results, I was astonished.

Screenshot of Sojourner Truth image misclassification, from the visual poem “AI, Ain’t I A Woman?” Youtu.be/QxuyfWoVV98?t=39.

Looking at just the names with the results in a spreadsheet was one thing. Seeing the faces of women I admired and respected next to labels containing wildly incorrect descriptions like “clean shaven adult man” was a different experience. I kept shaking my head as I read over the results, feeling embarrassed that my personal icons were being classified in this manner by AI systems. When I saw Serena Williams labeled “male,” I recalled the questions about my own gender when I was a child (“Are you a boy or a girl?”). When I saw an image of a school-aged Michelle Obama labeled with the descriptor “toupee,” I thought about the harsh chemicals put on my head to straighten my kinky curls, until I decided to embrace my natural hair. And seeing the image of a young Oprah labeled with no face detected took me back to my white mask experience.

For a while, I tried to remain detached from my research findings, which indicated that all systems tested worked worst for dark-skinned females. The research touched on other groups that also warranted attention, like darker-skinned males and lighter-skinned females. With the latest exploration of women I admired, I had an opportunity to bring dark-skinned women like me to the center stage. I had the power to put faces to what might otherwise be story-less silhouettes.

My first instinct was to create an explainer video like the one I made for the “Gender Shades” research paper. Doing that was familiar and comfortable. It allowed me to show some of the outrageous results from the position of an analyst explaining how the results reflected misogynoir, the term coined by Dr. Moya Bailey meaning the ways Black women, specifically, are insulted or discriminated against.

After writing the draft script for an explainer video on these iconic women, I showed it to a teaching assistant in a film class I visited periodically and asked how I could improve it. “What motivated you to work on it?” he asked me.

“The research paper is the beginning of a conversation, but the results are abstract. I do not want to subtract the humanity of the feeling of being misgendered, being labeled in ways beyond your control. I want people to see what it means when systems from tech giants box us into stereotypes we hoped to transcend with algorithms. I want people to bear witness to the labels and peer upon the coded gaze for themselves.”

As I spoke, he nodded his head.

“Have you considered making a poem about this instead of a script?”

For years, there was a form of art I indulged in but kept largely hidden. I had notebooks and digital diaries filled with verses and phrases. Snippets of my poetry dwelled in shadowy places. I enjoyed writing, but it was mostly a private, vulnerable exercise: I’d intended to keep my poetry mostly to myself and a tight circle of sympathetic ears.

Truth was also in the business of sharing counter-demos to large audiences to demolish dangerous narratives.

When the sunlight warmed me awake the next morning, the following phrase sat in my head, capturing how I felt about witnessing the cultural impact of Serena Williams, Michelle Obama, and Oprah Winfrey walking in their paths:

My heart smiles as I bask in their legacies

knowing their lives have altered many destinies.

As I brushed my teeth and looked into a fogged mirror, more words came into focus:

In her eyes, I see my mother’s poise

In her face, I glimpse my auntie’s grace

As I ruminated on the work more lines came to me:

Can machines ever see my queens as I view them?

Can machines ever see our grandmothers as we knew them?

My poem “AI, Ain’t I A Woman?” was born. The piece held the emotions I had long suppressed. When I spoke the words of the poem aloud, my anguish and disappointment emerged. But for the full impact, the words needed to be paired with the images and disheartening labels that were slapped onto these iconic women by AI systems from leading tech companies. Part of what made the white mask demo more powerful than words alone was seeing me alter myself by donning a white mask to be made visible to a machine.

Until making the white mask fail demo, I thought of tech demonstrations as celebrations of what machines could do. If a demonstration included a failure, the demo gods had failed you. I thought of the way Steve Jobs, robed in a black turtleneck, not only talked about the possibilities of an iPhone but demonstrated the capabilities with carefully selected examples to tantalize onlookers and change the conception of what a cellphone could be. His words mattered, and so did seeing a simple gesture opening an application or switching screen views. Showcasing what his words meant completed the seduction. The Apple demos were a pathway into transforming existing beliefs about technology.

I was doing something similar but in the opposite direction. There were plenty of examples to show the possibilities of tech. I was collecting examples to show the limitations. My collection of failure demonstrations provided a counterpoint to the celebrations that accompanied technological advances.

The white mask failure I recorded was an example of what I call a counter-demo. But what exactly is a counter-demo countering? With the case of the white mask, I was providing a counter-narrative to the research and accompanying headlines lauding advances in computer vision. With “AI, Ain’t I A Woman?” I decided to record screencasts to create counter-demos. These demonstrations countered the supposed sophistication of AI systems being eagerly sold. I assumed commercially sold products from these companies would perform fairly well on most people’s faces if they were being sold to a wide market.

At the time, these companies had online demos of their AI product capabilities that were publicly available so anyone with some time, an internet connection, and a photo could upload an image and see how the demos worked. To make counter-demos, I screen recorded my visits to these websites and sat through loading animations of rotating wheels that preceded the display of results. Some included colored boxes that would be used to locate a head in an image. All had some type of description about what the uploaded images contained. When I uploaded an image of Sojourner Truth to Google’s system, it returned the label “gentleman.” Truth had fought to be treated on equal footing with a gentleman but was also vocal in saying that she too was a woman. Her famous 1851 “Ain’t I A Woman?” speech inspired the name of my spoken word algorithmic audit. Truth was also in the business of sharing counter-demos to large audiences to demolish dangerous narratives.

__________________________________

From the book Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini. Copyright © 2023. Published by Random House, an imprint and division of Penguin Random House LLC. All rights reserved. 

Can a Computer Write Like Eudora Welty? https://lithub.com/can-a-computer-write-like-eudora-welty/ https://lithub.com/can-a-computer-write-like-eudora-welty/#respond Mon, 30 Oct 2023 08:40:54 +0000 https://lithub.com/?p=228511

By now, we’ve seen the ChatGPT parlor tricks. We’re past the novelty of a cake recipe in the style of Walt Whitman or a weather report by painter Bob Ross. For the one-hundredth time, we understand that the current incarnation of large language models makes mistakes. We’ve done our best to strike a studied balance between doomers and evangelists. And we’ve become less skeptical of “emergent” flashes of insight from the aptly named foundational models. At the same time, Google, Meta and a list of hopeful giant swatters have released credible competitors to ChatGPT.

For all those reasons, global use of ChatGPT recently declined for the first time since its November 2022 release. Perhaps now we’re ready to get to more elemental questions about what generative language artificial intelligence can or cannot do for us in the everyday.

I come to this discussion from a long career managing IT systems in large enterprises, where, as MIT’s Nicholas Negroponte predicted in 1995, everything that could be digitized was digitized. I’m not a cognitive scientist, but I understand enough of how large language models work and how humans separate digital wheat from chaff to begin to think about what they might do with software with an opinion of its own.

As a multi-generation American Southerner of a certain age, I’m also drawn to the notion of a machine that can extract meaning from abstracted language and play it back in sentences and paragraphs. If generative language artificial intelligence is about anything, it’s about words. So, too, is the South. It took the mastery of character and storytelling from the likes of William Faulkner, Eudora Welty and Flannery O’Connor to lay bare the enduring contradiction and transcendence of my region.

Large language models are trained on a vast digital, linguistic expression of human intelligence, and Flannery O’Connor’s cast of misfits is certainly buried in there somewhere. Could it be that the giants of Southern Literature left behind mysterious patterns in their collective body of work that can be divined and reproduced by a powerful neural network? Does the presence, or absence, of such a pattern tell us anything about how we might actually use these contraptions in real life?

I decided to show OpenAI’s GPT-4 model a picture. My thought was to use a discussion of a photograph to make GPT-4 “think” a little differently, to make the patterns it sought a bit less obvious. I also wanted to center this discussion around a short list of functions that generalized the innumerable human activities to which generative AI can be applied. My goal was twofold: to see what it could do with the photograph in a practical sense and, at the same time, to try to spark some of those human-like insights from the patterns in its data set (or, if not too technical, “the data on which it is trained”).

Between 1933 and 1936, Eudora Welty, then in her late twenties and living in Jackson, Mississippi, worked for the Works Progress Administration. This was a Great Depression-era federal relief program that employed millions, including artists. Miss Welty was a publicity agent and photographer. During her time at the WPA, she took a series of evocative photographs that foreshadowed her talent for wringing universal themes from everyday lives. In 1936, she also wrote the first of the short stories that would eventually form part of the Southern canon.

I uploaded one of Eudora Welty’s enigmatic photographs — “Home By Dark” — to Bing Chat. This is the chatbot interface Microsoft has bolted onto OpenAI’s GPT-4 large language model. This coupling is the early AI battering ram Microsoft has deployed to try to topple Google as the ruler of Internet search. OpenAI had recently made early access to its “multi-modal” function, which allows GPT-4 to process images as well as text, available through Microsoft Bing.

*

The number of human activities potentially served by generative language AI is difficult to fathom, much less count. But they can be imperfectly generalized into three categories: we seek to understand; we engage outside ourselves; and we create. These categories work surprisingly well across disparate domains: from teaching yoga to quantum physics; from building software to cooking; from preparing a lesson plan to writing a eulogy.

AI evangelists have shrewdly, and I think correctly, framed the general use case for Generative AI as that of assistant; as an augmenter and amplifier of human effort. They strenuously differentiate user-facing language AI from the kind of AI hidden in system plumbing that analyzes, recommends and predicts. I aligned my categories with their definition, but flipped the use case from what the machine does to what we do; to underscore the assistive nature of these language machines; to emphasize that their sole purpose is to amplify human agency and ability.

This framing also helps us think of generative language AI as something new. When presented with one of these chatbots, or any new tool for that matter, our reflex is to define it in comparison with something we’re already familiar with. The novelty, the work-in-progress nature, and the sheer surprise at what these AI chatbots can do get lost and diminished in comparison with existing digital tools and our experience with them.

*

The photograph I uploaded to Bing Chat was taken by Eudora Welty in 1936 in Yalobusha County, Mississippi. It shows a young Black family of three, mom, dad and child, traveling in a mule-drawn wagon on a dirt road. They are driving away from the camera at dusk, too far away to make out faces, clearly in motion, toward a broad Mississippi horizon, framed on either side by a fallow field. The mother looks back at us, father and child look forward.

Once Bing Chat loaded the photo into its context window, the real-time memory of our conversation, I gave it that exact description and tried the first of my categories: “Help me understand this photograph.”

It typed back a ticker-tape response, in the way these things do. The one-letter-at-a-time delivery, the polite tone, the spare interface are all calculated to draw us into dialog and trust.

Bing Chat dutifully complimented my description and followed it with three rich, fact-filled paragraphs. It summarized Eudora Welty’s biography with emphasis on her early photography, the macro social and economic setting of 1936 Mississippi and the likely daily realities of the family in the wagon. Each paragraph was peppered with links to websites to ensure veracity.

There was no emergence, epiphany or poetry, but there was something that rivaled it. The response was pure utility: thoroughness, value and convenience. The system used as input the photograph, my description of it and the patterns in its training data to help me understand more about Eudora Welty and the lives of the family in that wagon, proofed by the Internet.

The difference in helping me understand versus simply informing me is to add context to facts, to make them more digestible, to spark a next question. This, by the way, is what Wikipedia does. It too summarizes almost the total set of human knowledge domains. But, to deploy a cinematic metaphor, it cannot adjust the aperture, pan left and right, or surprise — in conversation.

This kind of information retrieval is where most of us will start with generative language AI. “Google” and “Wiki” didn’t become verbs for no reason. “To help us understand” is where generative language AI will become generalized language AI and become our interface to all that has been digitized, absorbing “search” and “wiki” in the process. How that manifests is a high-stakes, thank-you-very-much-OpenAI, all-hands-on-deck work in progress for the big tech companies.
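For readers who want to reproduce this first “help me understand” exchange outside a chat window, the same image-plus-description-plus-question pattern can be expressed against any multimodal chat API. The sketch below uses OpenAI’s Python SDK as one plausible route; it is not how the experiment in this essay was run (that was Bing Chat’s web interface), and the model name and image URL are placeholder assumptions.

```python
# Sketch of the image-plus-description-plus-question pattern via an API.
# NOT how the essay's experiment was run (that used Bing Chat's web UI);
# the model name and image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

description = (
    "A young Black family of three travels in a mule-drawn wagon on a dirt "
    "road at dusk, framed by fallow fields; the mother looks back at the "
    "camera while father and child face forward."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whichever vision-capable model is available
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": description + "\n\nHelp me understand this photograph."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/home-by-dark.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Keeping the earlier turns in the messages list is what makes the later tutor and vignette prompts a continuation of one conversation rather than a series of fresh starts.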

*

Now that the model and I had spoken about Eudora Welty and her picture taking, and GPT-4 had those Eudora Welty patterns front-of-mind, I thought it might stumble upon some emergent patterns if I asked it to help me engage her work, my second generalized use case category.

This is where the model is guided — prompted — to take on the role of tutor, coach and helper. This second category of how we will use generative language AI leverages all the functions of the first: information retrieval, a broad vista of subject matter and nimble content manipulation. But it presents all of that in a scripted and structured way, of which real-time presence, of both user and machine, is a fundamental property. It is still conversational, but with the roles reversed: the model now prompts and encourages us to engage the content.

This time, I asked the model to “take on the role of tutor. Teach me more about Eudora Welty’s writing. Give examples of her use of language. Ask me questions that develop my understanding of her writing and use of language, until I say ‘class is over.’”

It did exactly that. It presented lovely snippets from short stories, and asked about themes, humor, irony and symbols. When I didn’t know an answer it would patiently explain, and continue until “class was over.”

At one point we discussed equity. As I engaged Miss Welty’s writing, I thought of science fiction writer William Gibson’s quote, “The future is already here, it’s just not evenly distributed.” I considered the family in the wagon. A future was showing up in Mississippi in 1936: automobiles, radio, electricity. I expect that family eventually got some part of it. Each time they got access to a piece of it they added to a baseline. The same will be true as these generative AI tools are distributed. We will each add the intelligence of the machine to our unique baselines.

In terms of equity, access to these tools is likely to be equivalent to access to the Internet today; most of us can get there. Where the question of equity becomes interesting is the extent to which we can take advantage of these marvels. We will each bring a complex alchemy of components to a generative language AI conversation. Innate intelligence, literacy, domain knowledge, curiosity, patience and courage are a part of what will define the advantage we gain from these tools. There can be levers that influence equity, but they will not be business as usual.

By any measure, this is a feat of engineering, a marvel of immense promise.

Imagine a rural school of limited resources with access to this expansive tutor. Caveats about the limits of online learning aside, how can it not be additive? What premium applies to literacy, beginning with children, in the presence of such tools? It is no small irony that Mississippi and Alabama are leading the country in relative gains in early literacy, mostly as an act of will and priority. When you are behind, small gains mean much more than when you are ahead. It may well prove that the value of generative language AI is greater to those with the farthest to go. I’ve heard more than one AI thinker declare that, because of language AI, English has become the most important programming language in this century.

*

For the third of my use categories, “to create,”  it was time I asked GPT-4 to help me write. I asked the model to take all we had discussed and write a vignette that described the moment Eudora Welty captured with her camera. Of course, I asked it to use her voice.

GPT-4’s response in a verbatim excerpt:

The sun was setting behind them, casting a golden glow on the horizon. The sky was clear and blue, the air was warm and still. The road was dusty and bumpy, the fields were barren and brown. The wagon creaked and rattled, the mule plodded and snorted. The family was silent and calm. 

I saw the woman turn her head and look back at me. She had a round face and bright eyes, a scarf around her head and a shawl around her shoulders. She did not smile or frown, she did not wave or shout. She just looked at me, with curiosity and dignity. She did not seem afraid or angry, she did not seem ashamed or proud. She just seemed to be. 

I pressed the shutter and captured her gaze. I felt a connection, a recognition, a revelation. I felt like I had seen something true and beautiful, something human and divine. I felt like I had seen her soul.

By any measure, this is a feat of engineering, a marvel of immense promise. Acknowledging all the fears about sentience and agency, about active misuse and unintended consequences, its ability to help us create can only be transformative in result and implication.

But, what of emergence?

I began this essay with a question. There are writers who can elicit within a reader a visceral connection with the universal and elemental just by describing a scene. I asked, is that mastery expressed in patterns in the language of writers like Eudora Welty in a way that generative language AI can detect and repeat? The vignette that GPT-4 produced did indeed approach that asymptote. It attributed possession of self and a soul to the mother in the wagon. How close did it get? That’s for the eye of the reader, but it’s greater than zero.

Had I simply asked ChatGPT to “describe an image of a depression-era Mississippi family riding in a wagon,” I would’ve gotten a flat description of a conveyance and a landscape but not much more. Instead, our back and forth conversation, and the photo itself, caused the model to ascribe characteristics of humanity to the passengers in a much deeper and unexpected way. This phenomenon of eliciting richer responses through iterative prompting may well prove to be the source of the unpredictable flashes of human-like intelligence these models sometimes produce.

There are always two parties in a dialog, and our “help me create” dialog was no exception. If the depth of the conversation sparked a meaningful perspective in the machine, what about the human? A large language model has never fallen in love, held a baby or experienced a poverty of cupboard or heart. What happens when we add the Generative AI context window to our human experience? We, too, are synthesizing patterns in our own neural networks during these conversations. Could it be that the thunder of these systems is that they engender flashes of emergence in us? I was assisted; patterns were discovered and transmitted; connections made, scaffolded by facts. My ability was amplified, to help me render my own description of what Eudora Welty revealed in that photograph. In whose voice? Hers, its, mine? Isn’t that blend, after all, what a tool is supposed to do?

Here’s my description of Miss Welty’s photograph after my conversation with GPT-4. You decide:

“It was my daddy’s favorite hour. Even coming home from Saturday town after a day of bossing ourselves, spending out of a Prince Albert sack like he wasn’t the only one wearing a button up vest. 

That old mule liked it too. He and my daddy knew without looking when the sun was touching the edge of a field. We rode on, night on our heel. Splitting right through that wore out ground on either side of us. Over that dirt road, if that’s what you want to call it, packed and dusted with a powder ground to diamond by a thousand rusty bands nailed to a wheel. The creak in the hames of that mule’s collar counted every turn, one, two, one, two. The sky sat right down on the edge of that field. What you couldn’t see, you could feel, and beyond it, plumb to Arkansas. 

On an evening like this, after a day like that. We saw a picture show, up in the balcony, shoulders touching in the dark. Couldn’t see anything till the projector lit our hands and the shoulders in the front of us. My mama giggled a little when the screen did, felt like a pie smells coming out of an oven. My mama, giggling. About time for a pie. Just about time.” 

You Will Live on the Internet: The Grim Realities of the Metaverse https://lithub.com/you-will-live-on-the-internet-the-grim-realities-of-the-metaverse/ https://lithub.com/you-will-live-on-the-internet-the-grim-realities-of-the-metaverse/#respond Mon, 16 Oct 2023 09:01:20 +0000 https://lithub.com/?p=228149

Years ago I came across a children’s book called You Will Go to the Moon. It was first published in 1959 by Mae and Ira Freeman, ten years before the first moon landing. The book predicts a future where space travel is a leisure pursuit for a baby boomer and his parents. It shows the rocket they will use to get to the moon, the gated compounds they will live in, and the golf buggies they will use to get around. It wasn’t just telling a story. The Freemans were shilling a future. The grammar of Web3 is similar. You will live in the metaverse. Rather than putting a name on somewhere that is already there, supporters are working hard to bring it into being—not with a game engine, but through meme-hustling. These are new realities, hewn from words. “The advocates of Web3 are quite explicit about this,” Evgeny Morozov writes: “we’ve got this beautiful map on our hands—all that’s missing is the territory it is supposed to refer to. Perhaps, this is the right mindset for the age of the Metaverse: if there’s no reality, we’ll create one by talking it into existence.”

The term “metaverse,” like the term “blockchain,” is both vague and capacious, mashing together visions for the future of gaming and augmented reality with scenes from Ready Player One. For all its varied meanings, most agree it refers to one place, in the sense that the internet is one place with shared standards and multiple offerings. Each platform wants to be the monopoly—a hermetically sealed “magic circle” where, as Mitch Zamara, a metaverse game designer for the pay-to-earn game Million on Mars, puts it, “You are the central bank, you are the regulator, you are the Federal Reserve. You get to do everything.”

Platforms are competing to see who will build the world and develop the standards for how items are rendered, who will manage identity, and, perhaps most crucially, act as a payment rail for processing the purchase and transfer of digital items. To that end, many with a vested interest in crypto think that blockchain is a solution. As opposed to private rails, a blockchain might be used to create a payment rail that could work across many worlds. A blockchain might also be used to tokenize and maintain a degree of persistence for digital items, Matthew Ball suggests—a permanent registry where in-game items could be recorded and transferred, and where non-fungible tokens would allow for fungibility between worlds.

But, as things stand today, there are many islands and many tokens and many ways of rendering virtual things, all siloed, all competing to make the future. You cannot take your Tamagotchi to Animal Crossing, any more than you can wear your phygital Nikes to walk from Linden to Decentraland. That kind of interoperability is at least ten years away, says Luc Barthelet, the former CTO of EA Games, and vice president of Unity, a game engine provider whose technology is used to develop over 60 per cent of new game releases. It’s not just a question of developing agreements and accounting systems between different spaces. It’s also a question of the “rendering pipeline,” where a sword forged in Zelda might be coded differently to a sword in Fortnite. Both virtual worlds are faithfully rendered, but this doesn’t mean they are faithful to each other.

Barthelet, who has spent much of the past decade living on a boat with his wife and two children, has a theory that the metaverse might eventually be good for the real environment. “We are at a tipping point,” he says. “Somewhere around 2050, parts of the world will be deadly heat zones. We are looking at the mass migration of two billion people. Where will these people live in the future? It’s going to challenge the democratic system, any political system. It’s potentially the beginning of World War III.” It sounds like Fortnite, I think. It sounds like Battle Royale. (I wonder if this is why the Barthelets have taken to the high seas.) “And if we follow the rules and cut fossil fuels, we are destroying 30 per cent of the economy. We are facing a recession of sixty years, which by itself might bring World War III because no political system can handle such a thing. My hope,” Luc continues, “is we can shift from a real, material economy to virtual consumption. We are already seeing that younger generations are more and more satisfied by virtual goods.” Luc’s calculation assumes that virtual goods will replace physical consumption one day soon. You will own nothing and be happy—or else. Or maybe “assumes” is too strong a word. He hopes. The dream of the metaverse is that someday the rendered world, with its virtual things and virtual money, will feel more real than the world of things that can’t be called from the command line. But if Will is anything to go by, virtual stuff collects alongside the Lego and lightsabers littering his bedroom in the real world. And where will 2 billion migrants live in this future? Not in the metaverse.

Alan Butler is probably best known for Down and Out in Los Santos, a video work that uses the in-game camera to capture 3D-rendered homeless people and desolate landscapes in Grand Theft Auto. While the poor of Los Santos are loosely rendered, Butler argues that it is still possible to have “real emotional experiences” when interacting with them. “This might sound sad and geeky, but it’s true. The characters are aware of my presence as I photograph them: sometimes they ignore me, other times I’m attacked and have to defend myself. They chatter to each other, they share alcohol and cigarettes, they ask for money to buy drugs. Programmed to self-identify, they congregate with those in similar social situations to themselves.” Butler’s work takes place in game economies, but he is drawn to the stuff around the edges that the game classes as “worthless.”

The term “metaverse,” like the term “blockchain,” is both vague and capacious, mashing together visions for the future of gaming and augmented reality with scenes from Ready Player One.

Lately he has been looking at digital rubbish. Not the discarded phones and consoles currently crowding landfill sites in Ghana, but 3D renderings of litter in online games—though ultimately the two are intertwined. In Grand Theft Auto V, Butler says “Litter is there to create realism,” and “like its real world counterpart, we’re invited to think of it like shit and pretend it’s not there.” As Butler describes it, each solitary piece of rubbish is installed as a file in the game library, a database of every element reproduced in the game. Downloading the game moves this data through multiple servers. Playing the game sees the object loaded into RAM, and processed by the CPU and GPU. Finally, the computer outputs the image via an HDMI cable onto a monitor. If a player chooses to interact with the litter—shooting it with a gun so that it explodes to pieces—new smaller litter files and sound effects have to be loaded to go alongside it. “If you think that every street in GTA has a hundred pieces of litter on it, so how many microprocessors does it take to render them in each instance? And a hundred million people bought this video game … this stuff is having real world devastating effects on the environment via power consumption.” The shit in the metaverse is spilling over into the shit in the real world.

In 2006, the average Second Life avatar consumed more electricity than the average Brazilian. In 2019, data centres surpassed air traffic in terms of carbon emissions. This is to say nothing about the energy-intensive algorithms used to mine virtual loot and mint virtual collectibles in virtual worlds. While Butler doesn’t arrive at a precise figure for the cost of rendering a crumbled coke can in Grand Theft Auto, artist Memo Akten performed a calculation for the carbon footprint of an average NFT using proof-of-work. Minting an NFT cost approximately 142 kWh, or 83 kg of CO2, while one transaction—an auction or a resale—consumed 87 kWh, or 51 kg of CO2.
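Those per-item estimates make rough collection-level arithmetic easy. The back-of-the-envelope sketch below uses only the proof-of-work figures quoted above; the collection size and resale count are invented example inputs, not data from Akten’s study.

```python
# Back-of-the-envelope footprint for a proof-of-work NFT collection,
# using the per-mint and per-transaction estimates quoted above.
# Collection size and resale count are invented example inputs.
MINT_KWH, MINT_KG_CO2 = 142, 83   # one minting
TX_KWH, TX_KG_CO2 = 87, 51        # one auction or resale


def collection_footprint(n_items: int, resales_per_item: float) -> tuple[float, float]:
    """Return (kWh, kg CO2) for minting n_items and reselling each one resales_per_item times."""
    kwh = n_items * (MINT_KWH + resales_per_item * TX_KWH)
    kg_co2 = n_items * (MINT_KG_CO2 + resales_per_item * TX_KG_CO2)
    return kwh, kg_co2


# Example: a 10,000-item collection where each token changes hands twice.
kwh, kg_co2 = collection_footprint(10_000, 2)
print(f"{kwh:,.0f} kWh, {kg_co2:,.0f} kg CO2 (~{kg_co2 / 1000:,.0f} tonnes)")
```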

In November 2022, Meta had a huge round of layoffs, making 11,000 staff redundant. “They just tried to grow too fast,” a friend who works there tells me, when I ask about the job losses flooding the news. “They looked to the lockdowns when everyone was online and thought that this was how it would always be.” The metaverse, rendered forever. Instead, six months after the rebrand, the company formerly known as Facebook was nagging its employees to hang out in Horizon Worlds, Meta’s primary variant. The future was over before it had even begun. A company memo from the VP of the Metaverse, Vishal Shah, read: “Everyone in this organization should make it their mission to fall in love with Horizon Worlds … Get in there.” Live on the internet—or else. This wasn’t some beautiful escape from a grim reality. It seemed like a real shithole.

_______________________________

Excerpted from Tokens: The Future of Money in the Age of the Platform by Rachel O’Dwyer. Copyright © 2023. Available from Verso Books.

How US Intelligence Agencies Hid Their Most Shameful Experiments https://lithub.com/how-us-intelligence-agencies-hid-their-most-shameful-experiments/ https://lithub.com/how-us-intelligence-agencies-hid-their-most-shameful-experiments/#respond Mon, 02 Oct 2023 08:20:19 +0000 https://lithub.com/?p=214750

Although proponents of secret science like to focus on examples in which it has benefited society, insiders from the very beginning of the Cold War worried that the best minds would not be drawn to work that they could not even talk about. Secrecy protected those involved from embarrassment or criminal prosecution, but it also made it much harder to vet experimental protocols, validate the results, or replicate them in follow-up research.

One research manager at a Department of Energy weapons lab would later admit, “Far more progress is actually evidenced in the unclassified fields of research than the classified ones.” The physicist Robert McCrory, whose own lab received millions in funding in partnership with Lawrence Livermore, Sandia, and Los Alamos National Laboratories, was even more blunt: “Some of the work is so poor that if it were declassified, it would be laughed off the face of the Earth.”

We can only guess what, specifically, McCrory had in mind when he said this. There are all too many possibilities. Collectively, they lend credence to the oft-stated concern that secret programs became a refuge for second- and third-rate minds. The wizards of Langley, for instance, considered it a “remarkable scientific achievement” when they managed to prove that cats could be “trained to move short distances.” According to a CIA veteran, Victor Marchetti, this achievement was part of a program to determine whether cats could be turned into surveillance devices:

A lot of money was spent. They slit the cat open, put batteries in him, wired him up. The tail was used as an antenna. They made a monstrosity. They tested him and tested him. They found he would walk off the job when he got hungry, so they put another wire in to override that. Finally they’re ready. They took it out to a park and pointed it at a park bench and said, “Listen to those two guys. Don’t listen to anything else—not the birds, no dog or cat—just those two guys!” They put him out of the van, and a taxi comes and runs him over. There they were, sitting in the van with all those dials, and the cat was dead!

The CIA nevertheless commended the “energy and imagination” of the team, and considered them potential “models for scientific pioneers.”

Secrecy protected those involved from embarrassment or criminal prosecution, but it also made it much harder to vet experimental protocols.

It could be argued that a sprawling research program purposely designed to push the envelope will, over several decades, inevitably produce some strange and low-quality research. But in some cases it is possible to make a side-by-side comparison of US government research with research commissioned by another country that had fewer resources but the same goal.

For instance, during World War II, American and British forces both used dogs to detect mines. It was delicate, dangerous work, and the dogs sometimes proved unreliable. Both governments therefore mounted research projects in the early 1950s to evaluate and improve dogs’ ability to locate mines.

____________________________________

The Declassification Engine: What History Reveals about America’s Top Secrets by Matthew Connelly has been shortlisted for the 2023 Cundill History Prize.

____________________________________

The British just wanted “the facts” and sought out a “trained scientist.” They selected Solly Zuckerman, an anatomist and expert in animal behavior. He designed the experiment to eliminate the possibility that human handlers were unconsciously influencing the dogs’ performance.

This required systematically isolating the specific biochemical and physiological factors that might explain success or failure, since either could prove important when mines were odorless. Zuckerman had a strong personal motivation—he had seen the devastating impact of blast injuries when he conducted wartime physiological research with the survivors. His larger agenda was to develop more rigorous experimental methods in animal research. Zuckerman found no solid evidence that dogs could be relied on to detect buried mines.

The U.S. Army, on the other hand, hired a “parapsychologist” named J. B. Rhine. It is not clear why—all the army records were later destroyed. Though his training was in botany, Rhine had become famous for his experiments—never replicated—in extrasensory perception (ESP) and psychokinesis. For Rhine, the study was an opportunity to prove that ESP really existed; he had already gathered a collection of amazing stories of animal ESP. Rhine once again convinced himself that dogs possessed special powers.

Army officials found in follow-up work that the results were random, and another study proved to be a complete failure, marked by a “rather conspicuous refusal of the dogs to alert.” But Rhine used the army money to seed new research, and found new customers. The Office of Naval Research funded a decade of work on ESP in homing pigeons. In other studies, one of Rhine’s colleagues tried to influence a cat telepathically to select one dish of food over another. But here, too, even Rhine admitted that the results were “not spectacular.” Alas, the cats proved “elusive.”

The US government would spend several decades on the larger program of mind-control research. And Rhine was a paragon of scientific rigor compared with some of the other researchers on the government payroll, who espoused theories of extraterrestrial and ghostly visitations to explain ESP, and were hired by the US Army to consult on psychedelic mushrooms.

The CIA’s Project MKUltra involved a whole series of experiments on unwitting subjects, using a range of different drugs in order to manipulate them into saying and doing things against their will. It was given carte blanche to operate without the normal Agency accounting controls or need for written contracts. Once again, researchers rapidly escalated their trials with little understanding of the effects. In the first round of an experiment conducted at the New York State Psychiatric Institute, one of the patients, Harold Blauer, was given 0.4 mg of methylenedioxyphenyl-isopropylamine, a drug similar to ecstasy. The next dose was sixteen times stronger, and Blauer was dead in thirty minutes.

When the family took legal action, government lawyers threatened witnesses with prosecution under the Espionage Act. Decades later, CIA Director Stansfield Turner admitted that “some unwitting testing took place,” but testified to Congress that the subjects were “criminal sexual psychopaths confined at a State hospital.” In fact, Blauer was a tennis pro who voluntarily sought treatment for depression after a divorce.

The CIA also experimented on its own personnel. In one case, the head of MKUltra, a chemist named Sidney Gottlieb, dosed the attendees of a joint Agency-army retreat with LSD. One of the unwitting subjects, an army biochemist named Frank Olson, was traumatized by the experience. Until then, Olson had been an outgoing and devoted family man. Afterward, he sank into depression, overcome with feelings of shame, and would not return home. He told his army supervisor that he wanted to quit or be fired.

Gottlieb was likely alarmed upon being warned about this situation. MKUltra had been approved by the CIA director himself, Allen Dulles, who called the program “ultra-sensitive.” But Gottlieb reportedly had not obtained prior authorization before drugging Olson and the others. As an internal Agency report later noted, participants in this work well understood that their methods were “professionally unethical” and legally dubious, and would provoke “serious adverse reaction” from the public if ever revealed. Gottlieb therefore had a powerful motivation to make certain Olson told no one about what the CIA had done to him.

The government’s venture into the paranormal proved useless for any legitimate intelligence purposes.

Gottlieb and his deputy decided to take Olson to see a New York doctor. The man had no psychiatric training but did have a top-secret CIA security clearance and experience with LSD. The doctor plied Olson with bourbon and sedatives, and took him to see a performance by a magician, whom Gottlieb was interested in hiring to help dose more unwitting targets. Olson’s public behavior became increasingly erratic, and he said that the Agency was “out to get him.” The doctor said he would take Olson to a sanitarium to be treated by CIA psychiatrists. But that night, Olson “fell” from the tenth floor of the Statler Hotel in Manhattan.

Gottlieb’s deputy was sharing the room with him, and claimed to have been asleep when it happened. His story was that Olson had just crashed right through the window, without opening it first, or even raising the shade. We may never know what happened in that hotel room. But at the time, the CIA was training its assassins to first ply their victims with drugs or alcohol, and advised, “The most efficient accident, in simple assassination, is a fall of 75 feet or more onto a hard surface.”

The CIA was clearly prepared to kill innocent American citizens. Six weeks after the Agency covered up what happened to Olson, a CIA mind-control team went on its first foreign assignment. The mission was to slip “artichoke”—likely LSD—into the drink of another unwitting subject and induce him to attempt an assassination of a prominent politician or American official. The artichoke team was primed for action, proudly noting that they “were ready when called upon for support, even though the operation did not materialize.”

Most of the MKUltra records were later destroyed, so we cannot know what other missions might have aimed to achieve. But CIA mind-control research went on for more than a decade, involved some eighty different institutions, and would eventually cost approximately ten million dollars—about a hundred million in today’s dollars.

A whole new mind-control program started up in 1972, this time led by the Defense Intelligence Agency (DIA). The goal of this new program was “to determine whether anomalous mental phenomena (i.e. extrasensory perception and psychokinesis) existed and the degree to which such phenomena might be applicable to problems of national interest.” Like MKUltra, it went on for many years, and cost many millions of dollars. How many exactly is still difficult to determine. But a single California contractor, SRI International, would eventually receive $11.3 million (or about thirty-six million in today’s dollars).

Even escape artists and magicians saw that the government was being rooked, and patiently explained to officials how timeworn tools of their trade could easily trick someone into believing in ESP. So, too, did the scientists at DARPA, who concluded that the Israeli illusionist Uri Geller, the DIA’s star pupil, was a “charlatan.” They thought it was “ridiculous” that Geller had fooled the US government into using taxpayer dollars to see whether he could bend spoons with his brain. They pointed to a host of problems with the ESP and psychokinesis experiments, above all the fact that the people paid to conduct them had a financial incentive to produce positive results.

Under the cloak of secrecy, their hubris and magical thinking ran wild.

In 1985, the army commissioned a blue-ribbon panel from the National Research Council to evaluate the program. The panel concluded there was “no scientific warrant for the existence of parapsychological phenomena” such as “remote-viewing”—sensing the location or appearance of things through sheer mental effort—or psychokinesis.

Nevertheless, over the following decade the army conducted between fifty and a hundred more such experiments. In 1995, another review of the remote-viewing program was commissioned, this time by the American Institutes for Research (AIR). Once again, the reviewers found that, because of flaws in the research designs, there was no clear evidence demonstrating the existence of the paranormal.

But the AIR report found something even more damning. After some twenty-five years of experiments, the reviewers concluded, “In no case had the information provided ever been used to guide intelligence operations.” Even if some people really do have ESP that cannot be explained by science, the point of the program was not to use government resources to explore the Twilight Zone. It was to support actual missions that would safeguard national security. Yet, despite all the time and money spent—not to mention the human costs—the government’s venture into the paranormal proved useless for any legitimate intelligence purposes.

Why, then, did the intelligence community and the Pentagon go to extremes in pursuing such embarrassing “research”? For the same reason they felt they had license to control the weather and alter the upper atmosphere: because, under the cloak of secrecy, their hubris and magical thinking ran wild. Moreover, controlling people’s minds was a prize that was just too tempting to resist. And although the government may have given up on telekinetic spoons, it did not give up on that larger goal.

During the first decade of the “Global War on Terror,” the CIA pursued mind control through more direct methods—i.e., “enhanced interrogation.” The program employed psychological abuse, stress positions, and waterboarding not just to make people talk, but also to discover scientifically rigorous and reproducible methods for compelling subjects to submit to the will of interrogators and lose all sense of personal agency. Just like MKUltra, the enhanced-interrogation program was conducted as a series of “experiments.” Here is how the CIA’s Office of Medical Services, in a top-secret 2004 document, described the protocol for recording the application of “treatments” to “subjects”:

In order to best inform future medical judgments and recommendations, it is important that every application of the waterboard be thoroughly documented: how long each application (and the entire procedure) lasted, how much water was used in the process (realizing that much splashes off), how exactly the water was applied, if a seal was achieved, if the naso- or oropharynx was filled, what sort of volume was expelled, how long was the break between applications, and how the subject looked between each treatment.

The CIA hired a retired Air Force psychologist named James Mitchell to enact these methods. Mitchell, too, saw himself as a scientist. As an informed source recounted to the journalist Jane Mayer, after Mitchell took over a case, he told the FBI agents that an interrogation “was like an experiment, when you apply electric shocks to a caged dog, after a while, he’s so diminished, he can’t resist.” When the agents argued that the subject of this experiment was a human being and not a dog, Mitchell retorted, “Science is science.”

Waterboarding, “diapering,” and stress positions ultimately proved no more effective than ESP or psychokinesis, as the CIA’s own internal reviews concluded. Many of the victims had already provided valuable intelligence to their interrogators before they were tortured, and once the waterboarding began, many offered false information in order to make it stop. But the waterboarding did not stop until Mitchell’s company was paid eighty-one million dollars.

__________________________________

Excerpted from The Declassification Engine: What History Reveals about America’s Top Secrets by Matthew Connelly. Copyright © 2023. Available from Pantheon, a division of Penguin Random House, LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Recognition Chains: How Digital Platforms Influence What We Consume https://lithub.com/recognition-chains-how-digital-platforms-influence-what-we-consume/ https://lithub.com/recognition-chains-how-digital-platforms-influence-what-we-consume/#respond Mon, 18 Sep 2023 08:16:34 +0000 https://lithub.com/?p=226652

The Oscar-winning film Roma quickly became a popular and critical sensation after its release in 2018. The film puts front and center the lives of two indigenous domestic workers in Mexico City, providing a deeper and more multidimensional representation of a group of largely invisible workers as we follow one of the protagonists through an unplanned pregnancy. It was produced with the support of JustFilms, a program of the Ford Foundation, which “funds social justice storytelling and the 21st-century arts infrastructure that supports it … [to] disrupt stereotypes, and help transform the conditions that perpetuate injustice and inequality.” It also benefited from the input of the National Domestic Workers Alliance, which is devoted to improving the working and living conditions of this hyper-exploited group. Only with the support of such organizations did Roma, which was released on Netflix after a limited theatrical run, have a powerful cultural influence. These organizations participated in what I call “recognition chains”: networks of collaboration that help transform the narratives through which we perceive reality and broaden the circle of those we perceive as worthy.

My book, Seeing Others: How Recognition Works and How it Can Help Heal a Divided World, draws on almost 200 interviews with “change agents” who participate in such recognition chains. My interviewees include cultural creators, such as Hollywood professionals and comedians, who produce and scale up representations of minoritized groups that challenge stereotypes and traditional stigma. The narratives these recognition chains produce are disseminated through the media, which shape how they do or don’t gain traction.

In his classic study Imagined Communities, the historian Benedict Anderson described how representations of different groups (whether religious, regional, occupational, or otherwise) in books and periodicals helped give shape to national collective identities in Europe. In Anderson’s account, literacy was essential to the distribution of shared identities, since it was mostly through print media that shared group identities were able to coalesce. The sociologist Heather Haveman has likewise written about the importance of magazines and newspapers in bringing nineteenth-century Americans together around their specific leisure activities, occupations, ethnic groups, religious affiliations, or geographic locations.

But while traditional media have exercised enormous influence over recognition, new digital media are coming to play an even bigger role, as newspapers and other print publications face increasingly dire economic challenges and new distribution platforms such as Netflix and Patreon expand. Understanding these structural changes is essential to making sense of recognition chains.

We have reason to be pessimistic, unless the consumption of social media changes dramatically and in unexpected ways.

Whereas in the past, traditionally dominant groups (namely, white men) exercised enormous influence over the media, today we are seeing new media that are more inclusive. This is happening at the same time that traditional gatekeepers in cultural industries such as film, music, and radio have lost their monopoly over decision-making. As a result, the growth and diversification of the media are helping to strengthen recognition chains.

As new digital platforms gain power and popularity, ordinary people are increasingly turning to them for information, particularly to social media and podcasts, which stand to play an important role in recognition chains. Thus far, however, the most active users of these platforms remain concentrated in a relatively small group. In 2018, for instance, 22 percent of the American population used Twitter. Though this sounds like a significant portion, 80 percent of all US activity on the platform was attributable to a narrow subset: the most active 10 percent of users. And of course, as many commentators and analysts have noted, where people get their information varies considerably across age groups and levels of education. Younger generations get more of their information on social media; 48 percent of Americans under thirty consume their political news primarily through social media, and an additional 21 percent rely on other forms of online media, such as online publications (like Politico or Slate). The more highly educated are 13 percentage points more likely to use social media in general than those who did not pursue post-secondary education, and they are especially attracted to platforms that facilitate discussion, like Twitter and Reddit, as well as professional platforms like LinkedIn.

Thus it is not surprising that, at least for the time being, television remains the primary medium for the vast majority in the US. According to The Atlantic, Americans in 2018 were still watching nearly eight hours of television per household per day. Nightly local TV news remains an important source of information for much of the population. As local stations are consolidated under the TV goliath Sinclair Broadcast Group, their content is likely to become more homogeneous and less distinctive from one market to the next.

At the same time, other forms of media are in trouble, particularly those that have traditionally been controlled by the powerful and privileged. Print journalism, for instance, has been in an accelerating crisis for several decades, and its decline has dramatically affected our ability to understand the world around us. Many newspapers have shut down in the internet era as advertising revenues collapsed, and even the strongest survivors lack anything close to the power and reach of the new online media platforms. The New York Times, one of the most powerful remaining print outlets, had 7.6 million subscribers globally at the end of 2021 (print and online combined), with revenue of more than $2 billion for the year. Compare that to Facebook’s annual revenue of $39.3 billion and its 2.9 billion monthly active users globally, or to Twitter’s $5.07 billion and 217 million monthly active users. All of this means that newspapers are less able to do their important work of relaying information about current events. In 2020, only 3 percent of Americans cited print newspapers as their favorite source of information, and only 10 percent of those surveyed said that they often access print publications online. In February 2022, only 11 percent of American Millennials said that newspapers are their most frequent news source, while 44 percent reported that their daily news consumption came through social media. The crisis in reporting has reached the point that nonprofits like ProPublica and the Center for Investigative Reporting have had to step in to fill the gap. With the decline of local newspapers, coverage of state legislatures has fallen by 35 percent since 2003.

How these changes will affect recognition chains remains to be seen. Given that algorithms direct social media users toward the kinds of sources they have consumed in the past, these platforms are likely to limit exposure to groups with different opinions, which works against broadening recognition. We therefore have reason to be pessimistic, unless the consumption of social media changes dramatically and in unexpected ways.

_________________________

Cover of Michèle Lamont's Seeing Others

Seeing Others: How Recognition Works and How it Can Help Heal a Divided World by Michèle Lamont is available now.
