To what extent must rationality persevere in order to distinguish itself from faith? Is a seminal intention sufficient, or must rationality be subject to its own self-critique at every stage in order to remain a faithful non-believer?
I don’t think about faith or religion much because organized religion isn’t a predominant part of my life or community. I live a secular life, and for the most part the people around me have agreed to participate in this secular experience; we don’t talk about faith in a higher being or an organized religion except when those two things intersect with politics, e.g. when religious fanatics try to swing the political debate under the influence of the Bible or when Islamophobia rears its head. More subtly, faith is present as a characteristic of a culture, usually as the explanation of a friend or colleague’s personal formation: he grew up Jewish, she grew up Catholic, etc. So my musing about rationality in comparison to faith is a question about the status of these differing thought practices, and it points toward unorganized religion rather than the elements of institutionalized belief practices. My concern about these two paths of thinking about the world is based on the semblance of secular thinking I see today, one rooted in a reliance on consumer technological advances and salvation through technology, rather than a belief in the Almighty. However, I’m not attempting a history of automatons or avatars. Only when necessary will I distinguish what I mean when referring to “technology.” Rather, I’m aiming to write a marriage certificate for our society’s preference for violent domination and ecological suicide, and the life preservers we believe are in store.
Here’s my working baseline for what is rational: making decisions based on testable truths that may cultivate an expected, desired end, which is somehow beneficial to the decision maker. This isn’t intended to be a watertight definition, but it’s enough to get off the ground in comparing ‘rational’ with ‘faith.’ Really, the crux of the distinction pertains to the testability of the information that motivates the decision, i.e. whether it’s a theory that can be proven or disproven. Rationality, in this sense, is a blending of logic in the philosophical or mathematical sense with elements of the scientific method. There is also the anthropocentric element that what is rational can be conveyed to and understood by other people.
Faith is based on an idea that can be neither proven nor disproven. Inherently, it’s something that could be suspect but isn’t; it’s accepted on trust and explanatory in nature.
Still, very quickly after defining ‘faith’ and ‘rational,’ the meager gap allotted between them almost vanishes. The distance diminishes both within and between the terms. Between terms, I’m wondering whether I should have faith in the model of faith or have faith in the model of rationality. Within the model of rationality, the foundations on which any testable information is built become a faith for heuristic purposes. For example, I’m not an engineer, but I have faith that when I turn the key in a car’s ignition, the system of engineering principles is going to lead to combustion and I’ll be able to drive away.
In both working definitions I’m avoiding getting into the accuracy of these terms; I’m avoiding the Oxford Dictionary of Philosophy, and I’m avoiding choosing which standard or author to select as the expert, because the definition, precise or familiar, is not secondary but tertiary to what the argument is about. A common, everyday understanding (a rational understanding!) of these two types of thinking is sufficient. This diminution also plays into the third term, as well as into what I’m critiquing.
The definition of ‘technology’ also eludes accuracy and could easily lead me on a detour, but fortunately that elision in meaning resembles the very program at which I’m taking aim. I’m referring not only to superficial consumer technologies like the handheld devices Apple rolls out every six months. The exact object or technology is irrelevant in this argument, and in this definition. Instead, I’m referring to a network of intentions: the belief that there is a good in the pursuit and creation of knowledge toward an end, the production of that (by)product, the transmission of its utility and means to others, and the acceptance by others of the message that an advancement has been made, is relevant and, at times, warrants compensation or acquisition. The specificity of what is produced is as irrelevant in this scenario as the product that the subsequent product replaces, making the former version irrelevant to the consumer (and the technology obsolete to the previous researcher), or the definition of technology outdated. It’s the message of irrelevance that convinces the consumer there is a need for a new model or replacement, and that undermines the philosopher concerned with defining technology before arriving at a useful critique of it.
Here’s one of my favorite examples of this obsolescence, because it’s so fucking brashly absurd and overt: this year we’re persuaded to buy a phone because the screen is small. Next year, we should update because they’re smaller and more portable; it fits right in your pocket! Last year, we got screens that are curved; this year, the flatter the better.
Technological advances are predicated on disavowing a previous technology or version to some degree and claiming the newer version superior. This is basically the game of obsolescence. I first identified it in 1995, when my Nintendo Entertainment System and game collection were held inferior to the Super Nintendo Entertainment System, many of whose games were the exact same fucking titles but with more curves in the images. (I bought a Super NES but jumped ship on the video game world shortly thereafter, disavowing the vicious cycle of training your ten digits to twitch in a certain way… [was the piano the first game system?].)
This irrationality is ubiquitous in the consumer technology market, but it appears in other markets too. It’s tempting to expand the critique to media in general; a medium inherently supplants something else. The Roman popularity of glass was due to its ability to mimic other media, such as ceramics and metals. America’s love for polymers was, and is, their capacity to mimic molded metals, themselves surrogates for cast objects or hand-shaped forms. But is that supplanting necessary? And it’s not just the media that are irrelevantly supplanted; the products are also supplanted, creating microfractures within a technology. That is, a medium functions both by being superior to another medium, creating a possible route for its own improvement, and by outdating previous iterations. If you look at a 6-megapixel digital image taken in 2005 and compare it to a woodcut print from the 14th century, at one scale the digital image looks much better than the print. But once you scale up the competitors, the woodblock maintains its detail more like a vector image. The comparison may sound absurd when you’re thinking of your favorite portrait as a vector graphic rather than a pixel-based bitmap, but my point is proven in the ongoing arms race between camera producers. Prior to DSLR cameras, film manufacturers addressed the same issue of scalability: Kodachrome’s grainless texture allowed for larger scaling. But people now returning to 35mm or analog do so largely, and intentionally, for the texture that was lost. Seemingly every three months we’re taunted by a new, larger pixel sensor that promises more realistic images at greater sizes. There’s no comparable vector race. This is the consumer’s relation to technology, and it’s irrational at worst and explicable only in that we believe the inferiority that manufacturers and advertisers are selling to us about our current state.
But I’m not simply critiquing the consumerism of technology. There are consumer habits that aren’t subject to this irrationality. It’s also not specifically a capitalist issue; although many companies clearly rely on this irrational behavior, this religion, this dogma, could theoretically exist via state-sponsored technological advances. It could exist in communist arrangements. But in the capitalist, consumer context, this irrationality is often advocated under a disguise of rational comparisons, rather than as the sheer obsession it would be in a society without external parties who gain from our irrationality. Advertisers bombard us with presumably superior numbers that make older models look too small, too big, too slow, or too fast. Last year was 10″, this year it’s 100″. While the rational comparison is overt (the numbers and measurements are clear and true), the irrational motivation is hidden but requisite.
But staying with consumer technologies: Rationality and Faith enter stage left.
The rupture of reason and rationality in this technological scenario leads to dogmatic behavior so similar to organized religious thought, when compared to science, that it makes me wonder not only whether ‘technology’ is our contemporary global religion, but whether dogmatic, irrational behavior is inherently human. Around the world, people simultaneously identify the ills that a society ripe with consumer technologies produces, yet the attraction to it and the recapitulation of it continue in pantomime. The places of the world that manufacture our consumer tech know personally the dangers and destruction that producing this society entails; neighboring countries and towns tell the tales of landfills and toxic dumps. Barges of tech waste compete for open waters against barges carrying virtually the same techno-trash inbound, only wrapped and in time for Black Friday.
The odd behavior metastasizes into the illogical when we hope that technology’s problems will be solved by more advanced technologies. (And this is where it’s useful to think of technology in the broad sense, not just iterations of a technology and not just consumerism, but as existing at least since the Greek term ‘techne,’ referring to craft, art, or construction that follows techniques, i.e. a sequence of “technes” or principles of making. It’s a knowledge that is motivated toward something and built upon another, previous construction that includes principles and orders of operation.) And here’s where I’m deviating from the Luddites of the 19th century, who warned of basically the same: I’m not anti-knowledge or anti-any-single-technological-iteration, but the core questions for technology should be examined at the root of techne: What is the intention? And, if favoring rationality is important, by what means will that intention be achieved? What’s the cost of this technology? What’s the cost of a battery-powered battery-replacer? Electronic scissors? Ironically, almost all “As Seen On TV” consumer goods can be summarized in the aesthetic of the very technology on which they’re advertised: television, whose audience marked the first time that the richest society was proportionally the least healthy, and whose collective journey to learn about the technological advancements that led to a longer life was cut short while waiting for the paid program to resume after this commercial break.
Not only is a dogmatic faith in technological progress illogical, it leads to a contradiction. The argument is something like this: If technology is useful, that is, it fulfills a use that is intended, then it’s a good thing. If technology creates some bad byproducts, like extinction, pollution, or climate change, that are unintended, then it’s a bad thing. If the intended use of a technology is to reverse the bad, unintended byproducts of another, prior technology, then the second technology is, at least in part, a good thing, and the first technology was no more than in part a good thing. (So far, we’ve got the argument for “green technology” as well as Plato’s regress: a technology will always be fixed by a subsequent technology, ad infinitum.) If a technology is created that is irreversible and concludes in the extinction of human life, then it’s a bad thing. (This is the argument/concern for AI’s singularity: AI will evolve infinitely more intelligently and lead to human extinction.) But if a technology isn’t created to reverse the negative, unintended byproducts of a technology, then that absence is, at least in part, a bad thing, which can be remediated only by a technology that is good, in part. Fin.
Not only is the faith in technology irrational and illogical, it clouds social and legal avenues to solutions for problems that are haunting our society. As Richard Levins mentions in “Living the Eleventh Thesis”: “The hundreds of environmental justice groups that noted that toxic waste dumps were concentrated in black and Latino neighborhoods…insisting on the environmental causes of cancer and other infectious diseases while the university laboratories are looking for guilty genes.” (Tactical Biopolitics, da Costa and Philip, MIT Press, 2008, p. 30). Levins’s essay advocates for an ethically directed form of scientific research, which I agree is possible. In the passage I’m quoting, he advocates taking the problem out of the genetic context and putting it into the legal context.
Another example of how our technologism can baffle a simple solution is the case of police body cameras. I had the privilege of listening to the debate between Data & Society’s danah boyd [sic] and Jay Stanley of the ACLU. boyd’s argument is that the body cam surge is financially motivated, that camera footage can be doctored by police, and/or that police can learn to edit what’s in the frame by directing their camera and other tactics. In short, the promise of body cameras isn’t the solution to end the social and legal injustices presumably at stake (i.e. police corruption, willful killing without judicial process). Stanley’s argument is that, with the right legal structure, police body cameras could function as a police watchman. And in his defense, North Carolina did release footage that incriminated a police officer, but shortly thereafter it created a law that made such footage unavailable without a court order. There are states that seem to follow this path, but unfortunately, what’s much more prevalent is the absence of Stanley’s dream legislation and a closer semblance to boyd’s prediction: body cameras aren’t society’s savior against illegal police action; they’re tools to support it at worst and evidence in absentia all too often. Furthering unjust legal barricades, states like North Carolina are making that footage accessible only in prosecution against offenders, although citizen tax dollars paid for the equipment. What’s more, even with video footage there are legal barricades that perpetuate injustice, regardless of whether the footage exists or is seen by the jury. We can look at the myriad citizen videos that don’t conclude in due process, like the viral video of Eric Garner being strangled by a mob of NYPD officers; why would the (grand) jury care whether the footage is from a police body camera or a citizen’s smartphone?
But instead of dismantling these unethical legal obstacle courses, our society is in a frenzy to solve the problem with technology, and each year American cities spend tens of millions of dollars on this techno-trash. Is it that restructuring legal processes, community justice programs, or community-oriented police training isn’t sexy enough?
Technology. Consumerism. How about the illusion of progress? There are real, beneficial, rational, logical advances that we’ve made as a society. But it doesn’t follow that every problem we have can be solved by trudging further down a road of microchipery. Sometimes it’s more rational to cut your losses, annotate failures, and map a cul-de-sac as not a through road. How about just stopping? Take a nap. Rest. Wait it out.
We really believe that “technology” will solve all our problems, even and especially global warming. I’ve seen so many well-intentioned grants that hope for the right idea to pilot. While noble, these idea-cultivators don’t address the origins, systemic or societal, that have led to the current problem: industrial productivity and continued consumption. How about a grant for someone to just stop buying shit for a year? What would it cost to pay people not to buy a new phone for 24 months?
If we stagger more closely to the society being critiqued, we see that irrational belief expresses itself differently, but remains cogently irrational, at different levels. Within the echelons of the technology industry there are funders who sponsor students and research scientists to answer questions vertically deemed relevant, aided by hardware and software designed by programmers or companies. At the apex of many of these industries, individuals express their concern that what they are doing may have negative consequences. The esteem many of them have accrued touts the severity of the consequences described. The Future of Life Institute focuses on the threats of artificial intelligence, climate change, nuclear war, and biotech. FLI attempts to self-regulate the scientific world, to offer an Ethics 101 course to the lab rats who may have missed out on the humanities while cramming for O Chem. The topics FLI espouses are certainly things to worry about, yet within all of these preoccupations there’s a neglect of the historical trend of the applied sciences and the structural disenfranchisement of social groups, a trend that’s been the mainstay of technology when taken outside the institute and seen in daylight. The socioeconomic question, specifically that the consequences of technological “advancement,” when they reach the workplace, almost always impact women and racial and gender minorities more than white men, suggests that FLI might consider not only topics within science but also ask how “advancement” is considered, measured, and dispersed. Is the industry of science, particularly as it interfaces with consumer technology, structured in a way that makes rational researchers, deep in the scientific method, output real-world solutions that get distorted into irrational product trends?
The structural discrimination of tech is particularly acute given that these forms of labor (often unskilled, uneducated movements of the body) are less easily replaced by computers and programming than those in socially advantaged positions. The abacus, the surrogate mathematician, was created before the Industrial Revolution replaced the metalsmith. In today’s context, lawyers and computer programmers should be the first replaced by algorithms. Law, when well written, is basically a set of algorithms, and deviation from it is usually due to socioeconomic bias. Wouldn’t it be easier to program a program to program other programs than to make a program that runs a robot that articulates the movements of sweeping a room? If this question is rephrased to fit into FLI’s topic committee, it would be something like: Who will be the first to die in the fall of AI? Will it be the programmer god or the underlings the employer must pay? Is there any end to this frenzy of computer science programs, jobs, and applications? Is it asymptotic, or fated to create its own collapse?
Consider, as the recursive conclusion to this argument, this video in which vloggers attempt to exploit the coming surrogate virtual vloggers in order to garner followers.
Tactical Biopolitics, Beatriz da Costa and Kavita Philip (eds.), MIT Press, 2008, p. 30.
"Police Body-Worn Cameras," Alexandra Mateescu, Alex Rosenblat, and danah boyd, Data & Society Research Institute, February 2015. Accessed May 29, 2017.
"Police Body-Mounted Cameras: With Right Policies in Place, a Win for All," Jay Stanley, ACLU, March 2015. Accessed May 29, 2017.
"In North Carolina, body camera footage is no longer public information," Jack Smith IV, Mic, October 3, 2016. Accessed May 29, 2017.
"Justice Department to give $20 million to body cameras," WHEC News, September 27, 2016. Accessed May 29, 2017.
Future of Life Institute web page. Accessed May 29, 2017.
"Are Virtual Vloggers the End of YouTube?" Good Mythical Morning vlog, S11 E60, April 5, 2017. Accessed May 29, 2017.