Philosopher Evan Selinger worries that improved predictive technology “could transform us into what I call personalised cliches”:
[B]y encouraging us not to think too deeply about our words, predictive technology may subtly change how we interact with one another. As communication becomes less of an intentional act, we give others more algorithm and less of ourselves. This is why I argued in Wired last year that automation can be bad for us; it can stop us thinking.
When predictive technology learns how we communicate, finds patterns specific to what we’re inclined to say, and drills down into the essence of our idiosyncrasies, the result is incessantly generated boilerplate. As the artist Salvador Dalí famously quipped:
“The first man to compare the cheeks of a young woman to a rose was obviously a poet; the first to repeat it was possibly an idiot.” Yet here, the repetition is of ourselves. When algorithms study our conscientious communication and subsequently repeat us back to ourselves, they don’t identify the point at which recycling becomes degrading and one-dimensional. (And perversely, frequency of word use seems likely to be given positive weight when algorithms calculate relevance.)
Of course, we have experienced techno-panic about texting before. Early worries focused on whether conversing in shorthand would weaken our linguistic abilities, and those fears turned out to be unfounded. The next stage of autocorrect, I would argue, is different. Predictive texting won’t make our abilities atrophy, but if we over-used a highly functioning version of it, we might submit to something dehumanising.