I have written a motivation letter for my application to a PhD position with the following advertisement:
We are seeking a highly motivated Doctoral Candidate (DC) to join our research team as part of a prestigious Marie Skłodowska-Curie Actions (MSCA) Doctoral Network (DN) project: AlignAI. The AlignAI Doctoral Network aims to train doctoral candidates in the interdisciplinary Large Language Model (LLM) research field. It focuses on aligning these models with human values to ensure their development and deployment are ethically sound and socially beneficial. By integrating expertise from social sciences, humanities, and technical disciplines, the project will address critical issues such as explainability and fairness, ensuring LLMs contribute positively to education, mental health, and news consumption.
The doctoral candidate will focus on personas and prompt templates for aligned LLM companions, investigating how to identify user personas and on how to craft LLM companions that align to values and preferences. The successful candidate will have the opportunity to work closely with leading experts in the field.
Letter:
-----------------------------------------------
Dear Admissions Committee,
I am Vishal, a postgraduate from India with a strong passion for human-AI collaboration and research experience in Artificial Intelligence, particularly at the intersection of creativity, education, and societal impact. I am excited to apply for the Doctoral Candidate position in Prof. Andrea Cavallaro's group at IDIAP, Switzerland, as part of the AlignAI project.
My fascination with AI began through video games, where non-player characters (NPCs) seemed to react dynamically to user actions. However, as I delved deeper into neural networks and machine learning, I realized these NPCs were far less intelligent than I had imagined. This realization inspired a lifelong goal: to create AI agents or avatars with true, consistent personalities capable of meaningful interactions.
My journey into research began with a rewarding internship under Dr. Kazjon Grace at the Design with AI Lab, University of Sydney. There, I developed Twyptids, a human-AI collaborative sketching tool emphasizing real-time creative engagement. This experience nurtured my interest in AI systems that augment human creativity. Building on this foundation, my master's thesis focused on developing computational tools for deciphering and attributing ancient Tamil inscriptions, blending AI capabilities with social and historical contexts.
My entrepreneurial endeavors also reflect my commitment to building impactful AI agents. I co-founded an ed-tech startup focused on developing serious games for education, leveraging gamification to enhance learning experiences for school children. This effort deepened my understanding of user engagement and psychology, both essential for designing aligned LLM companions.
The AlignAI project's interdisciplinary goals align deeply with my aspirations, particularly in ensuring LLM companions remain consistent with user-defined personas across diverse domains such as education, mental health, and online news consumption. I am especially drawn to Prof. Andrea Cavallaro's focus on improving persona adherence in LLM agents, a foundational requirement across all applications. I am also very interested in collaborating with the teams of Prof. Stephan Wensveen and Prof. Sneha Das on LLM agents for education.
One of the key challenges with LLM agents is their occasional failure to maintain a consistent persona. Drawing from the literature, I propose several approaches to address this issue:
1. Narrative-Based Imprinting: This approach involves feeding stories about an agent's origin and motivations back into the system. The paper "Character-LLM: A Trainable Agent for Role-Playing" adopts this method. However, a notable limitation is that the avatars created by Character-LLM are primarily based on historical figures whose backstories are already well documented. The authors extrapolated events from the lives of these figures and embedded those memories into LLMs. When creating a custom agent, though, an entirely new backstory must be developed. Drawing on my novel-writing experience, I can design a systematic character template and generate rich backstories and life events using well-established narrative frameworks, such as Freytag's Pyramid and the Hero's Journey.
2. Psychological Modelling: This approach involves imprinting psychological parameters, such as a Myers-Briggs type or OCEAN (Big Five) traits, into an LLM's memory. The paper "PersonaLLM: Investigating the Ability of Large Language Models to Express Big Five Personality Traits" explores this method. Additionally, we could leverage established stereotypes, such as assigning astrological signs, to create distinct personas. Another possibility is drawing on psychological studies to establish strict behavioral rules for LLMs, ensuring they remain in character. This method focuses on enabling LLMs to emulate a character rather than truly creating one, as depicted in the movie Ex Machina. My background in sociology, gained during my work on the ed-tech startup, could contribute valuable insights to this process.
3. Mechanistic Interpretability and Pruning: During my research fellowship at BITS Pilani, I conceptualized a method to identify and neutralize parameters responsible for out-of-character actions using mechanistic interpretability. By fine-tuning or pruning these parameters, we could achieve more consistent persona adherence. I could pursue this idea alongside the other Doctoral Candidate at IDIAP working on LLM robustness.
The prospect of contributing to foundational research in persona alignment, applicable across multiple domains, excites me. I am particularly keen to collaborate with Prof. Cavallaro's team to explore novel prompting strategies and other techniques for refining LLM agent behaviors.
Thank you for considering my application. I look forward to the opportunity to contribute to AlignAI's mission of ensuring Large Language Models remain ethically aligned and socially beneficial.
Sincerely,
Vishal
-----------
Please tell me what is bad about it and how it could be improved.
Thank you so much.