Essay about Deep Learning for Grad School Application
Advances in artificial intelligence, along with a rapid increase in data, have revolutionized the world. Thanks to deep learning, it is now possible to hold human-like conversations with machines, create synthetic music and literature, and receive personalized media recommendations. However, these advances have also created problems that affect the society in which these systems operate, the security of the systems in which these algorithms are employed, and the privacy and autonomy of their users. The following paragraphs delineate some of these challenges.
Data reflects the society it stems from. Deep learning models can retain society's biases and produce unintended discriminatory outcomes. For example, analogies generated by the popular word2vec embedding - which is widely used to train natural language processing models - often associate the words homemaker, nurse, and receptionist with women, while words like boss and surgeon are associated with men.[1] Beyond gender, these biases extend to demography, race, and creed. Many datasets carry an inherent racial bias; tweets written in African American English are classified as hate speech significantly more often than those written in Standard American English.[2] Beyond algorithms trained on textual data, facial-recognition systems also tend to misclassify people with dark skin, who are underrepresented in datasets dominated by pictures of lighter-skinned people.[3] When such deep learning algorithms are commercialized or employed by government authorities, these biases can turn into policy issues that directly affect people. A real-world example of such adverse social outcomes is predictive policing. In 2015, a predictive policing application called the Crime Mapping, Analytics and Predictive System (CMAPS) was adopted by the police force in Delhi, the national capital of India. The tool uses machine learning to identify spatial crime hotspots in the city from Dial 100 emergency-call data. However, it is biased against underprivileged areas - which are often inhabited by people belonging to oppressed castes and communities - because residents of more affluent areas place far fewer emergency calls.[4] This can lead to confirmation bias among police staff, greater scrutiny of underprivileged areas, and over-policing there. Thus, the same tools that promise to make accurate and neutral decisions instead mirror the subjectivity and prejudice of society.
Overcoming such biases in data and building systems that provide equitable outcomes stands as the first challenge towards the responsible use of deep learning. A more responsible use of deep learning could include preprocessing training data to mitigate biases and designing more transparent algorithms.[5]
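One concrete preprocessing step of this kind is reweighing, a technique due to Kamiran and Calders: each training example is weighted so that group membership and the outcome label become statistically independent in the weighted data. The sketch below is only illustrative - the toy data and function name are my own, not drawn from the cited works.

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: return one weight per example so
    that, under the weights, group membership is independent of the
    label. Weight = P_expected(group, label) / P_observed(group, label)."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    pair_counts = Counter(zip(groups, labels))
    weights = []
    for g, y in zip(groups, labels):
        # probability of (g, y) if group and label were independent
        expected = (group_counts[g] / n) * (label_counts[y] / n)
        # empirically observed joint probability of (g, y)
        observed = pair_counts[(g, y)] / n
        weights.append(expected / observed)
    return weights

# Toy data: group "a" is disproportionately labeled 1, group "b" labeled 0
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
w = reweigh(groups, labels)
```

Over-represented (group, label) pairs receive weights below 1 and under-represented pairs receive weights above 1, so a classifier trained on the weighted data no longer learns the spurious group-label correlation.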
A major threat arises when deep learning algorithms are vulnerable to external attacks. The output of neural networks can be tampered with by poisoning the training data, and if the physically accessible sensors of an autonomous vehicle are compromised, the lives of its passengers can be at risk.[6] Moreover, techniques like model inversion can be used to infer private input data from a trained model. Such attacks can be particularly damaging in the healthcare sector, where patients' privacy can be compromised by the exposure of sensitive information like genomic data or clinical history.[7] The rapid increase in data from all sectors and the upsurge in the use of deep learning models thus pose a direct threat to the privacy and safety of users. Researchers have proposed various strategies to defend against such attacks: modifying training data through methods like adversarial training or data randomization, modifying deep learning models to increase their robustness, and using external tools for security can all make algorithms safer and more resilient.[8]
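Adversarial training, mentioned above, augments the training set with deliberately perturbed inputs. A standard way to generate such inputs is the fast gradient sign method (FGSM): nudge each input in the direction that most increases the loss. The following is a minimal sketch on a hand-built logistic-regression model; the model, inputs, and epsilon are illustrative assumptions, not taken from the cited sources.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic-regression model:
    shift input x by eps in the sign of the loss gradient w.r.t. x."""
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted P(label = 1)
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy model that classifies x correctly before the attack (z = 1.5 > 0)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])           # true label: 1
x_adv = fgsm_perturb(x, 1.0, w, b, eps=1.0)
```

Here the perturbed input `x_adv` crosses the decision boundary and is misclassified; retraining the model with such examples included (labeled correctly) is what makes it more robust to this attack.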
Another challenge arises from the intent with which these algorithms are deployed. Deep learning is increasingly used to attract people's attention and influence their decisions. This can have a positive impact on lives, for instance by nudging people toward healthier or more environment-friendly choices. However, the same techniques can be misused to negative societal effect. AI-enabled manipulation, for example, disrupts legal institutions and democracy by surveilling citizens' online conduct and tailoring persuasive content to them, thereby shaping their opinions.[9] This phenomenon was most notoriously observed in the Cambridge Analytica scandal, in which the consulting firm Cambridge Analytica harvested the personal data of millions of users for political advertising, notably to assist Donald Trump's 2016 presidential campaign and influence the outcome of the election. Coercive algorithms could also erode individual judgement and decision-making skills, since intelligent systems would nudge people toward particular choices. If many people are manipulated into behaving similarly and making similar decisions, the result is a "diversity collapse".[10] Innovation, economic development, and individual happiness would all suffer from such a loss of diverse perspectives and individuality. Hiring morally responsible and diverse team members and raising awareness about the importance of ethical implementations of artificial intelligence are the first steps toward a society in which deep learning is used for the common good.[11]
Advances in deep learning are a double-edged sword. Their benefits are unquestionable, and the fast-paced developments are unlikely to slow down soon. However, they also pose enormous risks to society. Ethical and secure applications of AI will help create a society in which humans can safely interact with technology and have the autonomy to make their own decisions.