We are all biased

For the past three months or so I've been learning the basics of artificial intelligence (AI) through an online course. It's a field that dominates the frontier of knowledge in our time. Initially I refused to dive in, despite the noise and razzmatazz surrounding recent progress in AI (particularly in chatbots such as ChatGPT and Google's Bard), because I felt the field was still too raw, too young in its development, for me to gain any meaningfully applicable domain knowledge. However, my first experience with ChatGPT, in which I witnessed a wonderful deconstruction of a software puzzle I was struggling with, quickly changed my perspective, and before long my journey into this fascinating field began.
 
One of the fields closely tied to AI is data science, in which insights that may benefit the user are derived from huge quantities of data. For example, if you own a grocery store, knowing which combinations of food items are most commonly purchased could help you shape a customer's shopping experience. If you notice that bread and milk are a fairly common combination, then placing bread and milk at opposite ends of the store means that anyone buying both has to walk the length of the store at least once, increasing their exposure to other items and making it more probable that they will pick up an extra item or two along the way. You probably expose yourself to a data-driven algorithm almost every day, especially in content-based media applications (think Spotify's and Netflix's recommendation systems). Given how rapidly we are moving towards a data-driven society (by some estimates, 90% of the world's data was generated in the last two years), the relevance of this field cannot be overstated. Yet despite these great advances, one small problem kept popping up in almost every data-related problem I worked on: the problem of overfitting.
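The bread-and-milk idea above is a simple form of co-occurrence analysis. As a toy sketch (the transaction log here is entirely made up for illustration), you could count how often each pair of items appears in the same basket:

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction log: each sale is a set of items bought together
transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"eggs", "butter"},
]

# Count how often each pair of items shows up in the same basket
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidates for the "opposite ends of the store" trick
print(pair_counts.most_common(3))
```

Real market-basket analysis uses more sophisticated measures (support, confidence, lift), but the core input is exactly this kind of co-purchase count.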

To understand what overfitting is, let me walk you through an example. Think of the scenario above, where you were the owner of a grocery store, except now you own not just one store but an entire chain of, say, ten stores spread out across the United States. Suppose you analyzed the data of a store that happened to be in a location with a large Muslim population. What kinds of insights would you receive from this data? Let's say you learn that lamb and buttermilk, pita and hummus, and garlic and cassia were often bought together, whereas the pork in the meat section would almost always go untouched. These insights make sense for a region with a large Muslim population, and with a bit of tweaking you could make some changes to this store (reducing the amount of pork stocked, for example) to optimize your finances. But would it make sense to apply those same changes to your other nine stores, spread out across the country? Of course not! Your model has only been exposed to the data of one region, so the insights it generates are heavily influenced by the characteristics of that region. To build a more accurate model, you would either train a separate model for each of the ten regions, or create one large model that averages out the values over all ten; using a model trained on a single region would fail to be useful (and might cause serious problems) for the rest of the grocery business. As I began to absorb this concept and really think it through, a thought began to tickle my brain: was my brain, the mushy organ contained in my head, an overfitted AI model? If you start to think about it, it makes a lot of sense.
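Overfitting is easy to demonstrate numerically. In this toy sketch (all data is synthetic), the true relationship is a simple line plus noise; a flexible model memorizes the noise in its small training sample, so it looks perfect on the data it has seen but degrades on fresh data from the same underlying relationship, much like the one-region grocery model:

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = x, plus noise. A small "one-region" training sample...
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

# ...and fresh data from the same relationship (the "other nine stores")
x_test = np.linspace(0, 1, 50)
y_test = x_test + rng.normal(0, 0.1, size=50)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

simple = np.polyfit(x_train, y_train, deg=1)    # captures the trend
flexible = np.polyfit(x_train, y_train, deg=9)  # memorizes the noise

print(f"degree 1: train={mse(simple, x_train, y_train):.4f}, "
      f"test={mse(simple, x_test, y_test):.4f}")
print(f"degree 9: train={mse(flexible, x_train, y_train):.4f}, "
      f"test={mse(flexible, x_test, y_test):.4f}")
```

The degree-9 polynomial drives its training error to nearly zero, yet its error on unseen data is worse than its training error, because it has fit the quirks of the sample rather than the underlying pattern.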

My early experiences on Twitter gave me some insight into human nature, and one of those insights is that we interpret things based on the information we have previously received. It's a common sight to see right-leaning and left-leaning users interpret the exact same post in wildly different ways, leading to anger and confusion on both sides. Twitter's algorithm is designed to make these interactions more likely. A right-leaning user will have a timeline curated to their own interests, and the more time they spend on that timeline, the more they start to notice patterns among certain issues; in other words, their brain over time becomes overfitted to a certain ideology, and everything they see is processed by that overfitted brain. The ultimate reality distortion occurs when that user crosses paths with a left-leaning user whose brain has been similarly overfitted by a different ideology. An angry encounter is inevitable, leading to unpleasant outcomes that could easily have been avoided with better awareness. But Twitter is not the only place where real-life overfitting takes place.

Our brain takes shortcuts often, and for good reason. Shortcuts are why you can start and drive your car almost subconsciously, instead of working through the tasks as tediously as you did the first time you drove; your brain has carved deeper and deeper neural paths each time you went for a drive, until driving became an automatic process requiring less energy and processing power. However, not all shortcuts serve us well. A bad experience with a certain group of people may cause you to avoid that group; you are not doing this out of any sinister intention, but simply because your brain associates the group with danger. We all hold biases and judgments against others that serve neither ourselves nor those people well, instead limiting potentially beneficial opportunities and relationships. A peer group's negative opinion of a film or piece of music may dissuade you from ever experiencing it, despite it potentially becoming a favorite of yours. The problem with our biases is that they can dictate your entire experience of life. A mental framework set to interpret events and occurrences negatively will color your experience of life in a very negative way; this is a common issue for many depressed and anxious people, whose mental frameworks set them up for perpetual disappointment. It reminds me of the novel Snow Crash by Neal Stephenson. In the novel's virtual Metaverse, the character Da5id views a data file in the form of a bitmap image (binary 1s and 0s). Da5id is an avid hacker, well adjusted to interpreting binary code, and that is exactly what allows the bitmap, which is actually a virus built on the ancient Sumerian ur-language, to manipulate his brain by traveling along the very neural pathways etched through his regular interaction with binary code, granting an external individual complete control over his mind.
It's eerily similar to the way we ourselves become brainwashed (and even controlled) by the biased inputs fed into us daily, whether through cable news, social media, peer groups, or family.

So how do we approach this issue? The first step is to become aware that your brain may be lying to you. More often than not it can jump to scary and illogical conclusions, and in those moments it's important to pause and reevaluate. It's not always necessary to know the answer; sometimes, acknowledging one's ignorance is a far more valuable tool than a biased opinion. It's also important to approach any form of opinionated content with a healthy skepticism. Unless you only read scientific papers and journals all day, most of what you consume will carry personal biases and emotionally appealing arguments, which may sound very convincing but can still be very false. You must think before you even think: notice the direction in which you tend to jump to early conclusions, and therein your biases will lie. You must also realize that you will never have complete information about a topic; acknowledge your information gaps and allow some space for your ignorance. Finally, it helps to limit the amount of information you consume. This may seem counterintuitive, but the more information we consume, the more our inner biases feel validated, and the further we slip into a fixed pattern of thinking without even realizing it! It is important to recognize our cognitive biases and the ways they affect our day-to-day lives, and I hope this article has helped with that!
