

Essay about Deep Learning for Grad School Application

houdini2518
Oct 8, 2022   #1


Prompt: The scientific essay/paper, submitted along with the online application, should be approximately 1,000 words in length. Your scientific paper/essay should introduce the topic, discuss it, and lead to a logical conclusion. Please use relevant data and scientific literature to support your argumentation. Ideas that are not your own must be identified as such! The term "essay" can be misleading - we require a scientific paper with proper citation! Your sources of information must be listed at the end. The essay must be your own work and you should write it without any assistance.

Which new challenges for computer science arise from recent advances in deep learning, e.g., autonomous driving, Google's LaMDA, and Deepfakes?

Advances in artificial intelligence, along with a rapid increase in data, have revolutionized the world. Deep learning now makes it possible to hold human-like conversations with machines, create synthetic music and literature, and receive personalized media recommendations. However, these advances have also created problems that affect the societies in which these systems operate, the security of the systems that employ them, and the privacy and autonomy of their users. The following paragraphs delineate some of these challenges.

Data reflects the society it stems from. Deep learning models can absorb society's biases and produce unintended discriminatory outcomes. For example, analogies generated by the popular word2vec embedding - which is widely used to train natural language processing models - often associate the words homemaker, nurse, and receptionist with women, while words like boss and surgeon are associated with men.[1] Beyond gender, these biases extend to demography, race, and creed. Many datasets carry an inherent racial bias; tweets written in African American English are classified as hate speech significantly more often than those written in Standard American English.[2] In addition to algorithms trained on textual data, facial-recognition systems tend to misclassify people with dark skin, who are underrepresented in datasets dominated by pictures of lighter-skinned people.[3] When such deep learning algorithms are commercialized or adopted by government authorities, these biases can turn into policy issues that directly affect people. A real-world example of such adverse social outcomes is predictive policing. In 2015, a predictive policing application called the Crime Mapping, Analytics and Predictive System (CMAPS) was adopted by the police force in Delhi, the national capital of India. This tool uses machine learning to identify spatial crime hotspots in the city from Dial 100 emergency-call data. However, the tool is biased toward underprivileged areas - which are often inhabited by people belonging to oppressed castes and communities - because wealthier areas place fewer emergency calls.[4] This can lead to confirmation bias among police staff, and to heightened scrutiny of and over-policing in underprivileged areas. Thus, the very tools that promise accurate and neutral decisions instead mirror the subjectivity and prejudice of society.
Overcoming such biases in data and building systems that deliver equitable outcomes is the first challenge for the responsible use of deep learning. A fairer use of deep learning could include preprocessing training data to mitigate biases and designing fairer, more transparent algorithms.[5]
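The embedding bias and the preprocessing step described above can be sketched with toy word vectors. Everything below is illustrative: the 4-dimensional vectors are made up (real word2vec vectors are 300-dimensional), and `debias` is a minimal version of the hard-debiasing projection from Bolukbasi et al.[1]:

```python
import numpy as np

# Made-up 4-dimensional "embeddings" for illustration only.
vectors = {
    "he":        np.array([ 1.0, 0.2, 0.3, 0.1]),
    "she":       np.array([-1.0, 0.2, 0.3, 0.1]),
    "homemaker": np.array([-0.7, 0.5, 0.1, 0.4]),
    "surgeon":   np.array([ 0.6, 0.4, 0.2, 0.5]),
}

# Gender direction: difference between a definitional word pair.
g = vectors["he"] - vectors["she"]
g = g / np.linalg.norm(g)

def bias(word):
    """Projection of a word vector onto the gender direction."""
    return float(vectors[word] @ g)

def debias(word):
    """Remove the gender component from a word vector (hard-debiasing)."""
    v = vectors[word]
    return v - (v @ g) * g

print(bias("homemaker"))               # negative: leans toward "she"
print(bias("surgeon"))                 # positive: leans toward "he"
print(float(debias("homemaker") @ g))  # ~0 after neutralization
```

In this sketch, "preprocessing" amounts to subtracting each word's component along the learned gender direction before the vectors are used downstream, so occupation words no longer encode gender.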

Deep learning algorithms that are vulnerable to external attacks pose a major threat. The output of a neural network can be tampered with by contaminating its training data. For example, if the physically accessible sensors of autonomous cars are compromised, passengers' lives can be put at risk.[6] Moreover, techniques like model inversion can be used to infer private input data from a trained model. Such attacks are particularly damaging in the healthcare sector, where patients' privacy can be violated by the exposure of sensitive information like genomic data or clinical history.[7] The rapid increase in data across all sectors and the upsurge in the use of deep learning models pose a direct threat to the privacy and safety of users. Researchers have proposed various strategies to defend against such attacks: by modifying training data through methods like adversarial training or data randomization, modifying deep learning models to increase their robustness, and using external security tools, algorithms can be made safer and more robust.[8]
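The kind of attack these defenses target can be sketched on a toy linear classifier. The weights, input, and threshold below are invented for illustration; the perturbation is the fast gradient sign method (FGSM), a standard adversarial-example technique (for a linear score the gradient with respect to the input is simply the weight vector):

```python
import numpy as np

# Toy linear classifier: positive score => positive class.
# Weights and input are made up for illustration.
w = np.array([0.9, -0.4, 0.3])
x = np.array([1.0, 0.5, 1.2])   # clean input, classified positive

def score(x):
    return float(w @ x)

# FGSM: step the input against the sign of the score's gradient.
# For this linear model, d(score)/dx = w.
eps = 0.8                        # large enough here to flip the class
x_adv = x - eps * np.sign(w)

print(score(x))      # positive on the clean input
print(score(x_adv))  # negative on the adversarial input
```

Adversarial training, mentioned above as a defense, would feed perturbed inputs like `x_adv`, paired with their original labels, back into the training set so the model learns to resist such perturbations.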

Another challenge arising from deep learning is the intent with which these algorithms are deployed. Deep learning is used to capture people's attention and influence their decisions. This can have a positive impact, for instance by nudging people toward healthier or more environmentally friendly choices. However, it can also be misused to negative societal effect. For example, AI-enabled manipulation disrupts legal institutions and democracy by surveilling citizens' online conduct and tailoring its influence to them, thereby shaping their opinions.[9] This phenomenon was most notoriously observed in the Cambridge Analytica scandal, in which the consulting firm Cambridge Analytica collected the personal data of millions of users and used it for political advertising - notably to assist Donald Trump's 2016 presidential campaign and influence the outcome of the election. Coercive algorithms could also erode individual judgement and decision-making skills, since intelligent systems would nudge people toward certain decisions. Many people being manipulated into behaving similarly would lead to a "diversity collapse".[10] Innovation, economic development, and individual happiness would all suffer from such a loss of diverse perspectives and individuality. Hiring morally responsible and diverse team members, and raising awareness of the importance of ethical implementations of artificial intelligence, are first steps toward a society in which deep learning is used for the common good.[11]

Advances in deep learning are a double-edged sword. Their benefits are unquestionable, and the fast-paced developments are unlikely to slow down soon. However, they also pose enormous risks to society. Ethical and secure applications of AI will help create a society in which humans can safely interact with technology and have the autonomy to make their own decisions.

[1] Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in neural information processing systems, 29.

[2] Davidson, T., Bhattacharya, D., & Weber, I. (2019). Racial bias in hate speech and abusive language detection datasets. arXiv preprint arXiv:1905.12516.
[3] Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
[4] Marda, V., & Narayan, S. (2020, January). Data in New Delhi's predictive policing system. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 317-324).

[5] Fu, R., Huang, Y., & Singh, P. V. (2020). Artificial intelligence and algorithmic bias: Source, detection, mitigation, and implications. In Pushing the Boundaries: Frontiers in Impactful OR/OM Research (pp. 39-63). INFORMS.

[6] Nassi, B., Mirsky, Y., Nassi, D., Ben-Netanel, R., Drokin, O., & Elovici, Y. (2020, October). Phantom of the ADAS: Securing advanced driver-assistance systems from split-second phantom attacks. In Proceedings of the 2020 ACM SIGSAC conference on computer and communications security (pp. 293-308).

[7] Fredrikson, M., Lantz, E., Jha, S., Lin, S., Page, D., & Ristenpart, T. (2014). Privacy in pharmacogenetics: An {End-to-End} case study of personalized warfarin dosing. In 23rd USENIX Security Symposium (USENIX Security 14) (pp. 17-32).

[8] Qiu, S., Liu, Q., Zhou, S., & Wu, C. (2019). Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences, 9(5), 909.
[9] Serbanescu, C. (2021). Why Does Artificial Intelligence Challenge Democracy? A Critical Analysis of the Nature of the Challenges Posed by AI-Enabled Manipulation (August 4, 2021), 105-128.

[10] Helbing, D. (2019). Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies. In: Helbing, D. (eds) Towards Digital Enlightenment. Springer, Cham. doi.org/10.1007/978-3-319-90869-4_6

[11] Simons, D. (2019). Design for fairness in AI: Cooking a fair AI Dish.
Holt - Educational Consultant
Oct 9, 2022   #2
There seem to be two thesis statement paragraphs in this essay when only one is needed. The first paragraph is an unnecessary introduction that does not really help focus on the actual topic chosen for this paper, so, in my opinion, it can be omitted in favor of the second paragraph. The second paragraph is more effective in presenting the focus of the discussion, why it is important, and how the situation can be improved. That is what the reviewer's consideration will center on.

The US elections and Cambridge Analytica could be made optional for this presentation, since the previous establishing point is based on crime incidents in India. Suddenly shifting focus to the US elections could confuse the reader and weaken the presentation through the abrupt topic change. Maintain the cohesiveness of the discussion throughout; that will also help in presenting a better, more solid solution.
