layer on layer
adds just enough thought
to form, to fortify
layer on layer
I finished reading World Without Mind: The Existential Threat of Big Tech last week. Given the Zuckerberg hearings in the past fortnight…this book is certainly of-the-moment, which means that (among other things) it’s overdue at the library.
In some respects, Franklin Foer strikes me as Neil Postman’s heir. As Technopoly warned, technology has changed and expanded (and continues to do so) so quickly that it is difficult for anyone to be certain exactly what ideas, mores, or other cultural artifacts might be jettisoned as a matter of course. There is no time to appreciate, much less anticipate, all the changes technology can wreak.
Foer alternates his attention between the tech itself and those who wield it. GAFA (Google, Amazon, Facebook, Apple), he says, have “imperiled the way we think” by leveraging their “intoxicating convenience” to “press [people] into conformity.” He discusses the power of GAFA’s curation as manipulation of knowledge and an erasure of free will, but I’m convinced that Amazon making it easy to click on a book does not mean Amazon has forced me to buy it. The gap between consideration and action is still, thankfully, large.
Among Foer’s other concerns is the fact that, increasingly, decision-making – and, perhaps, more creative work – is being given over to algorithms instead of humans. Given his profession (staff writer/editor), one can understand why he’d feel threatened by the specter of automatically composed reports. He also seems somewhat concerned by Google’s pursuit of AI. I don’t believe AI is actually possible, despite what Descartes thinks about humans as complex organic machines, so it seems to me that the bigger problem is Google’s tendency to ignore copyright law in its quest to digitize all published books as grist for the AI mill.
Foer is also Postman’s heir in that the solutions he proffers are weak in the face of the huge problems he diagnoses. He describes how much these corporations lobby in Washington, details some of the strategies they’ve used to avoid paying hundreds of millions of dollars in taxes, notes how the overlap of data and personal transparency is three steps away from a certain sort of authoritarianism, and notes again the ascendancy of algorithms – then states a need for antitrust legislation to break up this new type of monopoly, and a Data Protection Agency to force GAFA to give consumers a way to purge their data.
I don’t know enough about the industry to know whether this is even possible, much less likely. If these corporations are already guilty of tax evasion on a huge scale, how would you force them to play nice with data, and why would you expect them to obey new laws about it? “Google’s leadership doesn’t care terribly much about precedent or law,” according to one of the company’s attorneys (regarding the book digitizing effort in particular, but surely it applies more broadly). Wired’s writeup of the hearings seems to agree: “Because these businesses operate differently from those in more traditional industries, they must be regulated differently. Congress, and by extension regulators, don’t understand enough about these businesses to regulate them, and risk further entrenching their power by attempting.”
Silicon Valley apparently believes that regulations or anti-trust efforts can’t threaten Facebook’s dominance, that privacy controls won’t make Facebook more appealing to consumers, and that those currently at the helm have good intentions.
I’m skeptical as to that last point. As Lewis put it:
Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.
So, what do we do about this, aside from government-based solutions that will probably fail in the face of an army of attorneys? Foer has some recommendations for fighting at the grassroots level. The popularity of organic, whole foods (and similar food-based trends) gives him hope that people who care about what they put in their mouths will also come to care about what they put in their brains, and where it came from.
He also proposes that “cultivated” people pay to keep journalism alive, that the pursuit of objectified cultural capital would draw sufficient funding to support journalism as a livelihood. I rather think he conflates journalism with any and all writing or publishing, but either way the point stands.
(An aside: reading this book in between movements of Verdi’s Requiem was curiously appropriate and beneficial. It gives one hope for the continuation of the arts; it reminds the soul of God, of religion, of miracles; and it also grants some perspective: no matter how much control these companies have, they cannot destroy my inheritance.)
Some may find Foer too liberal for their taste; others might long for an orderly history of technological development that reads less like an old boys’ club. Some, like me, might find Foer unable or unwilling to discuss humans as humans: interested in convenience until the tipping point where other considerations take precedence; stubborn; guided by the intangible and the numinous, not merely by what Big Tech serves up. But overall, World Without Mind is a warning well worth reading, illuminating how privacy is all too often the price paid for convenient consumption. Hopefully it is a timely admonition, rather than a moment too late.
Before I begin, let me put a disclaimer here: these are hastily assembled thoughts, engaging with the subject at hand before spending hours or days reading up on it, pondering it, and defining all my terms more effectively. If you like, consider this whole post a placeholder for later thought.
Personality, the will, sentience – these are solely products of the breath of life, given by God, rendering wholly artificial intelligence impossible to create.
If you took a human person, replaced his limbs with prosthetics, compensated for the destruction of his nerves with some manner of electronic signals, gave him other replacements for his original organs or viscera, rigged up an elaborate support system for his brain: all this still would not make the intelligence itself, the person, artificial. It isn’t all mechanical. I disbelieve in a mechanical mind (though certain aspects of a generic human brain, to a certain extent, give physical or mechanical evidence of the processes occurring therein).
If a neural network produces anything, it does so through training. It doesn’t actually have its own intelligence to go on, just the promptings of an actual human (or, perhaps, a whole lot of data gleaned from a great many actual humans).
Can it, per se, ever recognize humor? Consider the InspiroBot Inspirational Images Generator. The generator generates; the human looks at the image and caption together, and that interaction is where the humor happens. It is humorous because the human mind recognizes the absurdity, because the human mind is struck by the unexpected.
I suppose you could argue that the generator creates the humor by presenting the unexpected. But I would then argue that the generator generates as it is trained to generate, making this ultimately a human creation.
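To make the point concrete, here is a toy sketch of my own (it assumes nothing about how InspiroBot actually works): a word-level Markov chain that "generates" text can only ever recombine the human-written material it was trained on. The machine supplies no meaning; every word it emits was put there by a person.

```python
import random

# Toy illustration: a word-level Markov chain "trained" on two
# human-written captions. All it can do is recombine that text.

def train(sentences):
    """Map each word to the words observed to follow it."""
    table = {}
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=6, seed=0):
    """Walk the chain; every emitted word came from the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

captions = [
    "believe in the power of dreams",
    "dreams are the power of belief",
]
table = train(captions)
print(generate(table, "believe"))
```

Whatever "inspirational" string comes out, its vocabulary is entirely borrowed from the humans who wrote the captions, which is the sense in which the creation remains ultimately human.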
As in my post of last week, I am in the position of reviewing a book long after I first read it. However, after reading Neil Postman’s Technopoly last March, I reread it in May, took copious notes on it in June, and still have it to hand for further consideration, because this book gave me so very much to ruminate upon.
Having stumbled over the book’s prologue while idly Googling the story of King Thamus and the Egyptian god of invention Theuth, I wondered how I had never heard of this author before. Postman wrote at least seventeen books about the nature of education, how various technologies and media can contribute to (or interfere with) it, and the effect this all has on humans, particularly children. The bulk of his work and writing occurred between 1960 and 1990, and Technopoly was published in 1992.
All of this is to say that, though Postman analyzes a technological landscape over twenty years old, so much of it still rings true that the man seems somehow prophetic.
His thesis: technology appears to be a friend, but does not give us time for reflection on potential losses before it changes the world. As scientists and inventors strive to make life easier, healthier, and longer…technology begins to usurp the place of our critical thinking and our consciences. It is so intertwined with modern life that most of us have difficulty finding a distant enough vantage point to see what consequences, secretly intended or unintentional, may follow. As King Thamus tells Theuth (or Thoth), “the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it.” The king referred to writing, distinguishing memory and wisdom themselves from the recollection and appearance of wisdom which writing would make possible.
Basically, technology can be used for good or ill – but once a tool enters a culture, it changes that culture: not just here or there, but throughout. For example, a culture that can produce written records can – eventually will – shift away from having an oral tradition. Hurrying toward what is ahead, the inventor does not necessarily examine all these implications, all the ways his invention will change the world – nor, typically, do those using it. Instead, everyone emphasizes their hope for all the good this invention will bring. The culture thus conspires against itself: the onlookers cannot know how this novelty will change their existence, nor that they might well end up on “the losing side” of a technology.
Maintaining that technologies reflect and create the ways people perceive reality, Postman sets out his definitions (by description) of tool-using cultures, technocracy, and technopoly. Tool-using cultures use tools – many or few, simple or sophisticated, beloved or held in contempt – to solve problems of physical life, or to serve the symbolic world (e.g., art, politics, myth, ritual, religion). The tools are determined and directed by the culture, and thus they generally do not attack its dignity or integrity. Rather, the culture is unified in belief (possibly theocratic), which provides order and meaning for the people within it.
He does list some tools which can intrude on cultural beliefs – the stirrup, the clock, mills, matches, and rifles – so I think those can be tied to the rise of technocracy. Here, tools are central to the world of thought. Technocracy disdains and subordinates, but does not destroy, social or symbolic traditions (partly because it’s too new to change venerable phenomena like elder wisdom, regional pride, or social structure; partly because it’s busy doing other things). Postman notes that Western technocracies were rooted in the clock, the printing press, and the telescope: three tools which changed how society organized time, how new ideas were disseminated to all sorts of new readers, and how men viewed the cosmos and their place in it. Listing off various natural philosophers-become-scientists, Postman avers that the precision of man’s knowledge of the cosmos “collapsed [the] moral center of gravity,” causing “the psychic desolation of an unfathomable universe.” Even so, the believing scientists remained faithful, concerning themselves with learning and truth, not power or progress…until Francis Bacon came along. Thereafter, people came to believe that knowledge was power and continuing progress was possible, while their belief in God was shaken if not obliterated.
More inventions, more factories, more production, faster communication…generally, people learned how to make this all happen, but didn’t spend as much time asking why. And so western society approached Technopoly: a totalitarian technocracy, wherein efficiency, objective data, and unambiguous calculation are valued more highly than human judgment, human dignity, or the complexity of the unmeasurable. “Lacking a lucid set of ethics and having rejected tradition, Technopoly searches for a source of authority and finds it in the idea of statistical objectivity.” Thus ideas are reduced to objects, abstractions are ranked, and realities which were never meant to be reduced to numbers – human intelligence, a student’s understanding of a subject, beauty, ability, how people regard political candidates, etc. – are flattened and simplified until they fit into such boxes.
Postman acknowledges that a certain amount of generalization or oversimplification is necessary for everyone, given that we are awash in information: the sorcerer’s apprentice, with only a broom against the flood. But in the past, some institution (familial society, religion, etc.) provided the framework for belief and understanding, dictating what was of greater or lesser importance. Technocracy unraveled that moral and intellectual coherence, and now such institutions, and such overarching structures of belief, are held in suspicion by the Technopoly-addled. What do they have instead? An incomprehensible universe, and an unending river of data sans context. Data management becomes the driving concern – again, not asking why this information or that must be preserved, but only caring how. “Information appears indiscriminately, directed at no one in particular, in enormous volume and at high speeds, and disconnected from theory, meaning, or purpose.”
So. Having been alarmed by the way in which society regards the universe as incoherent, the vicious cycle of bureaucracy, and blatant reductionism, what can we do?
Postman’s response – he admits that it’s not really a solution – is that, at an individual level, we must cling fast to the narratives and symbols which quicken us and organize our thought.
At a societal level, schools are probably the best arena for improvement. The curriculum therein tends to have some coherence and connectedness, and presents ideas or attitudes that can permeate “a person with no commitment, no point of view, but plenty of marketable skills.” Or so we hope. Since it’s unlikely that religion, love of country, or emotional health would be used to provide structure for students’ knowledge, something else must do so. Postman suggests “the ascent of man” – the idea that “humanity’s destiny is the discovery of knowledge.” The arts and humanities can be joined with science “to gain a unified understanding of nature and our place in it.” Instead of excising anything religious, a study of religious systems can (apparently) help tell “the story of humanity’s creativeness in trying to conquer loneliness, ignorance, and disorder.”
The sudden influx of quotations probably displays my feelings toward this approach: I can’t actually summarize it and keep a straight face. I agree that it’s valuable for our culture to have a nontechnical or noncommercial concept of education, but I don’t know that this approach to learning would be able to overwrite society’s years of emphasis on education as the means to achieve material or financial success; after so many years of people asking “How?” I don’t know how to convince everyone to ask “Why?” instead.
Postman also recommends teaching as much history as possible – not only the history of political events, or of each school subject, but of history itself. This, he hopes, can help illuminate why we know the things we know, whence our ideas and sensibilities issue, and how cultures change. He urges that different theories be propounded if not endorsed or established: “To teach the past simply as a chronicle of indisputable, fragmented, and concrete events is to replicate the bias of Technopoly, which largely denies our youth access to concepts/theories, providing only a stream of meaningless events.” Which has always been my problem with understanding history: why bother remembering distinct events if I don’t understand the point of them? Postman agrees with that: “The worst thing we can do is present [facts] devoid of coherence.” Rather, we should go beyond the event into larger concepts, theories and hypotheses, comparisons and evaluations.
For my own part, stuck in my unfashionable Christian beliefs and morality system, it’s clear that human-centered solutions cannot fill a spiritual pit. Technology cannot cure its own disease. Practical decisions cannot solve moral quandaries. There can be no experts in child-rearing and lovemaking and friend-making, because individual people are not problems to be solved. If the great danger is to become Adolf Eichmann – the Holocaust organizer who was indifferent to the fact that the timetables and logistics he oversaw were part of the deportation and killing of millions of people – then our defense is to care more about our actions and their consequences, especially the effects on our fellow man.
This is similar to Postman’s final conclusion: that to resist Technopoly, we must be loving resistance fighters. We must understand that technology is a product of a particular economic and political context; that all technology carries with it “a program, an agenda, and a philosophy that may or may not be life-enhancing;” and that all technology demands examination, judgment, and control.
My corollary: Keeping “an epistemological and psychic distance from any technology” requires an understanding of, and respect for, the dignity of the human soul. Distrust of technology will not change our society, our culture, our world so much as love for our fellow man.
June brought me books and peculiar poetry and Sherlock. July brought a good deal of travel for everyone. August has brought me a number of new things – a new house, new housemates, new pursuits – and one of those new experiences happened yesternight.
My roommate invited me to something called A2 New Tech Meetup. You might guess (reasonably) that it’s not my usual kind of event. I am not much of an Ann Arborite, I’m about as low-tech as it gets (where phones are concerned, anyway), and I don’t necessarily try to meet new people from the first two categories. When I reached the auditorium for this presentation, it took some chutzpah to walk normally and take a seat rather than flee from the alarming ratio of men to ladies. What am I doing here? I don’t have one of these fancy phones. I don’t use apps, much less develop them.
But I stayed, and my initial misgivings gave way to fascination with the projects these groups are working on. In the end, it was good to hear about five companies from The Tech Brewery.
My roommate works for one of them: a startup company called AppKey, which works to “combine the download scope of a free app with the revenue of a premium app.” Of all the people with Android phones, 1 percent will bother to pay 99 cents for “the full version” of an app; the other 99 percent will use the free version. This, of course, is their prerogative. But it leaves the developers wanting, since they do a fair amount of work to put out those free apps. What to do? Allow consumers a different ‘key’ to the premium app, namely, let them opt in to passive banner ads (rather than the annoyingly glittery advertisements that few willingly click). The advertisers get more ads out (and viewed), the developers get a portion of the revenue based on how often the keyed-premium app is used, and the consumers get premium apps without spending anything on them.
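As a back-of-envelope illustration of the pitch’s arithmetic: only the "1 percent pay 99 cents" figure comes from the talk; the user count, opt-in rate, session counts, per-session ad revenue, and developer’s share below are entirely made-up numbers for the sake of the comparison.

```python
# Hypothetical comparison of the two revenue models AppKey contrasts.
# Only the 1% conversion at $0.99 is from the presentation; every
# other parameter here is invented for illustration.

def premium_revenue(users, price=0.99, conversion=0.01):
    """Revenue if only a small fraction buys the paid version."""
    return users * conversion * price

def ad_keyed_revenue(users, opt_in=0.30, sessions_per_user=50,
                     revenue_per_session=0.002, developer_share=0.6):
    """Revenue if opted-in users 'pay' by viewing passive banner ads."""
    return (users * opt_in * sessions_per_user
            * revenue_per_session * developer_share)

if __name__ == "__main__":
    n = 100_000  # hypothetical install base
    print(f"Premium model:  ${premium_revenue(n):,.2f}")
    print(f"Ad-keyed model: ${ad_keyed_revenue(n):,.2f}")
```

Under these invented assumptions the ad-keyed model roughly doubles the developer’s take; the real numbers would of course depend on ad rates and how often the app is actually used, which is exactly why the revenue share is tied to usage.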
Then there was abriiz, the company with the app for phone-toting kids with asthma. If a child with severe asthma has different medications to take at various parts of the day, it can be easy to forget doses. The application issues a reminder at set times, logs the doses taken, and tracks progress toward some incentive the parent and child set up. The idea is that timely dosage-taking will prevent some of the severe asthma attacks such children suffer, and cut down on ER visits. That saves the insurers money, and everyone is happier…assuming the child doesn’t lie; they had to admit that the app couldn’t prevent that (though, as they pointed out, asthmatic kids have good reason not to skip around on such things).
DeepField came next; it works with internet service providers to track data in the Cloud. So many sites depend on other sites/suppliers/networks to remain in place. When Amazon’s Cloud had an outage in Virginia in June, a number of other sites and services went down with it. DeepField provides tools for those sites to see which suppliers they depend on. That gives internet providers more data about where their network (and the energy/money expended on it) goes, so their business decisions are more informed. I must admit that the project sounded intriguing, but I know very little about Cloud computing and therefore can’t get very in-depth. On one hand, it’s frustrating; on the other, it reminds me how much there is to learn.
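The general idea, as I understood it, can be sketched very simply (this is my own toy example, not DeepField’s actual product, and the site names are invented): if you know which services each site relies on directly, a graph walk reveals everything a site depends on transitively, which is how one outage in Virginia can take down sites that never heard of the failing supplier.

```python
# Toy sketch of transitive dependency discovery (not DeepField's
# product): walk the dependency graph to find every supplier a
# site relies on, directly or indirectly.

def transitive_deps(graph, site):
    """Return all suppliers reachable from `site` in the graph."""
    seen = set()
    stack = [site]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical dependency data.
graph = {
    "my-shop.example": ["cdn.example", "payments.example"],
    "payments.example": ["aws-us-east"],
}
print(transitive_deps(graph, "my-shop.example"))
```

Here the shop never references "aws-us-east" directly, yet it surfaces as a dependency through the payments provider – the kind of hidden linkage the talk suggested providers want to see.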
Next up was QizBox, which is being developed by some folks at Bowling Green State University. Here’s the premise: so often, students sit in class and don’t want to raise their hand or otherwise participate. But give them an anonymous forum and boom: the developer’s seen a lot more participation in his classes, and the questions or answers given are all more trackable. There’s a section in the Box for note-taking and seeing what information/images/etc. the professor has uploaded. Students can give feedback about what’s giving them trouble, and they’re working to make it so students can encourage each other.
Last was the fellow discussing Proteân. The word means “versatile,” and the idea is to re-imagine the ubiquitous plastic card to allow for one versatile object. The average American has 6 cards – debit card, credit card, ID card, loyalty cards, gift cards, etc., etc. The Protean system takes what they call the Echo card, and it mimics all those different cards. Customers don’t have to root around in purses or wallets for that card they swear they had (I didn’t realize how annoying this was until they pointed it out to me); merchants get more information on customer spending and, theoretically, more sales (because customers will be better placed to take advantage of loyalty-based deals). They’ve done a great deal of work to make the Echo card secure.
All told, I’m glad I went. All I expected was to learn a bit more about what my roommate did, but the presentations (succinct as they were) helped me understand what sort of possibilities are out there in the ocean of technology. It’s fascinating to recognize that other people have looked for a way to make the world more efficient, have looked for and found problems to solve. Admittedly, I don’t always agree with the solutions. QizBox, for example, might engender students who are afraid to take a stand for what is right because they’re accustomed to anonymity. The data collection from AppKey and the Echo card might allow for uncomfortably well-targeted ad campaigns, raising all manner of privacy concerns (and, depending on how much self-control buyers have, budget worries). The Echo card might even prove more inconvenient because losing or breaking it means losing a whole lot of things at once.
Still, it’s exciting to learn about the approaching wave of technology. Even this old-phone user is intrigued by the projects discussed last night – the tip of the innovation iceberg.