🔸 F-IE-3: User Survey DataDive

User Survey and Data Collection for AI Code Assistants

Foundational Track - Introductory AI Explorations



Objective

The primary focus of this project is to develop students’ skills in User Survey and Data Collection, crucial for careers in STEM and AI fields. It also introduces UX research fundamentals, emphasizing user interviewing and persona building.

Students will engage with Generative AI-powered code assistants like AWS CodeWhisperer, GitHub Copilot, and Google Duet AI. They will use interviews, surveys, and trend analysis to understand user behaviors and motivations comprehensively.

Leveraging AI tools such as ChatGPT for data analysis, students will transform insights into actionable personas and design strategies. This practical application of AI technologies aims to streamline data processing and improve product development, culminating in the creation of intuitive, user-centered products.


Learning Outcomes

Upon successful completion of this project, students will gain:

  • Proficiency in Data Collection Techniques: Mastery of diverse data collection methods, including user interviews and surveys, for gathering comprehensive insights into trends, user experiences, and motivations.

  • Data Transformation Proficiency: The ability to convert raw data from user interviews into structured, actionable insights, employing AI technologies to identify patterns and streamline the path from data collection to actionable outputs that inform product development and user-centered design.


Steps and Tasks

1. Understanding UX Research and Data Collection

Begin by exploring the fundamentals of UX research, focusing on how it integrates into broader data collection and analysis efforts in product development. Learn about user experience principles and how they help gather actionable insights. Start with What is UX Research? from the Interaction Design Foundation.

2. Designing the Data Collection Framework

Set clear goals for your user interviews and surveys, emphasizing the collection of data that reveals user interactions with AI-powered tools. Develop a structured interview guide that aligns with these objectives, ensuring questions are targeted to uncover deep insights into user behaviors and needs as well as trends. For guidance on creating effective interview guides, check How to Write an Interview Guide.
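One way to keep an interview guide aligned with its goals is to treat it as structured data. The sketch below shows a minimal Python representation; the goal and questions are hypothetical examples for illustration, not part of the project brief:

```python
# A structured interview guide expressed as data. The goal and all questions
# below are hypothetical placeholders; write your own based on your objectives.
interview_guide = {
    "goal": "Understand how developers use AI code assistants day to day",
    "warm_up": [
        "What is your role, and how long have you been programming?",
    ],
    "core_questions": [
        "Which AI code assistants do you use, and how often?",
        "Walk me through the last time a suggestion helped you.",
        "When do you ignore or reject suggestions, and why?",
    ],
    "wrap_up": [
        "If you could change one thing about the tool, what would it be?",
    ],
}

def flatten_guide(guide):
    """Return all questions in the order they would be asked."""
    return guide["warm_up"] + guide["core_questions"] + guide["wrap_up"]
```

Structuring the guide this way makes it easy to review whether every core question maps back to the stated goal before you run a single interview.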

3. Conducting Interviews and Surveys

Engage with users through structured interviews and widely distributed surveys to collect diverse data on user experiences with Generative AI code assistants. Use platforms like online forums to find participants. Learn more about conducting effective interviews at How to Conduct User Interviews.

💡 Note: If you find that you haven’t collected adequate data, consider joining the Code-Along Discussions. This will allow you to combine your data with that of other participants, enhancing the depth of insights you can generate.

4. Data Analysis and Synthesis

After collecting data, employ AI-assisted tools to transcribe and analyze the information. Use techniques such as affinity mapping or AI-assisted coding of responses to identify patterns and themes. This stage is crucial for translating raw data into a format that supports informed decisions about product development.
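Once responses have been coded with themes (manually or with AI assistance), tallying theme frequency across participants is a simple first analysis step. Here is a minimal sketch; the participants and theme tags are invented for illustration:

```python
from collections import Counter

# Hypothetical coded responses: each interview has been tagged with one or
# more themes during analysis. All data here is made up for illustration.
coded_responses = [
    {"participant": "P1", "themes": ["saves time", "trust issues"]},
    {"participant": "P2", "themes": ["saves time", "learning aid"]},
    {"participant": "P3", "themes": ["trust issues", "context limits"]},
    {"participant": "P4", "themes": ["saves time"]},
]

def theme_frequencies(responses):
    """Count how many participants mentioned each theme (one vote per participant)."""
    counts = Counter()
    for response in responses:
        counts.update(set(response["themes"]))
    return counts
```

Sorting the resulting counts with `most_common()` surfaces the dominant themes, which then feed directly into persona attributes in the next step.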

5. Creating Actionable Personas

From the analyzed data, construct detailed user personas that represent various user segments. These personas should include demographic details, user goals, preferences, and pain points. Utilize these personas to guide the design and development of solutions that meet actual user needs. Refer to User persona examples for templates and inspiration.
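A persona can likewise be captured as a small structured record so that every design decision can be traced back to it. The sketch below uses a Python dataclass; the persona details are hypothetical, not real survey results:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A user persona distilled from interview and survey data."""
    name: str
    role: str
    experience_years: int
    goals: list = field(default_factory=list)
    pain_points: list = field(default_factory=list)

# Hypothetical persona for illustration; populate from your own analysis.
novice_nina = Persona(
    name="Novice Nina",
    role="Computer science student",
    experience_years=1,
    goals=["Learn idiomatic code patterns", "Finish assignments faster"],
    pain_points=[
        "Accepts incorrect suggestions without noticing",
        "Unclear why a suggestion was made",
    ],
)
```

Keeping goals and pain points as explicit fields makes it straightforward to check, for each proposed design change, which persona it serves.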

6. Developing User Journey Maps

Illustrate the complete user interaction with the AI-powered code assistants through journey maps. Each map should capture key touchpoints, emotional experiences, and potential areas for enhancing user satisfaction. This visual tool helps in understanding and improving the overall user experience. See Journey map examples for how to create effective maps.
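A journey map can also be reduced to data for analysis: each touchpoint records the stage, the user's action, and a satisfaction score averaged from survey responses. The stages and scores below are hypothetical:

```python
# A journey map as data: stage, user action, and average satisfaction (1-5).
# All values are hypothetical; derive yours from collected survey data.
journey = [
    {"stage": "Setup",      "action": "Install the assistant plugin",   "satisfaction": 4.2},
    {"stage": "First use",  "action": "Accept an inline suggestion",    "satisfaction": 4.5},
    {"stage": "Daily work", "action": "Review multi-line completions",  "satisfaction": 3.1},
    {"stage": "Debugging",  "action": "Ask why a suggestion was wrong", "satisfaction": 2.4},
]

def weakest_touchpoint(touchpoints):
    """Return the touchpoint with the lowest satisfaction, a candidate for design focus."""
    return min(touchpoints, key=lambda t: t["satisfaction"])
```

Identifying the lowest-scoring touchpoint this way gives a concrete, defensible starting point for the "areas for enhancing user satisfaction" the map is meant to reveal.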


Self Evaluation for AI Mentor/Evaluator Conversation

Prepare for your discussions with the AI mentor by reflecting on the following key areas, tailored to enhance your understanding of data collection and analysis within the project:

Evaluation for Virtual-Internships Admissions:

The points provided are designed to guide your conversation for admission into our Virtual-Internships program. Focus on demonstrating your enthusiasm for learning, your ability to tackle challenges, and your understanding of how this project relates to real-world applications. The AI mentor is looking for candidates who show potential, a growth mindset, and a genuine interest in the field. Remember, it’s not just about what you’ve done, but also about what you’ve learned and how you’ve grown through the process.

Evaluation for Skill Certifications on the Talent Discovery Platform:

After completing your Virtual-Internship, you’ll be well-equipped to engage in more in-depth technical discussions. The points under this section are tailored for these advanced conversations, which are part of our Skill Certification process on the Talent Discovery Platform. Here, you’ll demonstrate not just proficiency in the tools and techniques, but a deep understanding of methodological choices, data analysis considerations, and their impact on product development. These discussions are designed to showcase your readiness for high-level industry roles or advanced academic pursuits.

If you already have the required expertise to handle an advanced conversation, feel free to skip the virtual-internships and jump straight into skill certifications!

Remember:

  • For both types of conversations, the AI mentor will adapt to your level of expertise. Be honest about your current knowledge and eager to learn.
  • In both cases, don’t just list what you did; explain why you made certain choices and what you learned from the outcomes.
  • The Virtual-Internship experience is designed to prepare you for the more rigorous Skill Certification evaluations. Embrace the learning process, and you’ll find yourself naturally growing into the depth required for these advanced discussions.
  • Last but definitely not least, talk about soft skills wherever applicable!

Now, let’s dive into the specific points for each type of evaluation. Feel free to explore these topics in the order that best tells your unique project story:

User Survey and Data Collection for AI Code Assistants

Evaluation for Virtual-Internships Admissions
  • Start with a Brief Project Overview: Begin by summarizing the project objectives and the key activities you engaged in (User Survey, Data Collection, AI Analysis, Persona Building). This sets the context for the discussion.

  • Discuss Data Collection Techniques:

    • Methodology Selection: Reflect on your choice of data collection methods (interviews, surveys, trend analysis) and how effectively these methods aligned with the project goals.
    • Execution and Adaptability: Evaluate the execution of your data collection strategies. Discuss any challenges you faced, such as finding participants or ensuring diverse data, and how you adapted your methods in response to these challenges.
    • Quality of Data Gathered: Consider the quality of the data collected. How did you ensure the reliability and validity of the data? Discuss any improvements you implemented to enhance data quality.
  • Data Analysis Proficiency:

    • Technique Application: Detail the analytical techniques used to process and interpret the collected data. Highlight how you utilized AI tools like ChatGPT for data analysis.
    • Pattern Recognition and Insight Generation: Reflect on your effectiveness in identifying patterns and deriving insights. Were there specific instances where data analysis led to unexpected findings or challenges?
  • Application of Insights:

    • From Data to Design: Discuss how the insights obtained from data analysis were translated into actionable personas and design strategies. Explain how these insights guided your recommendations for improving AI-powered code assistants.
    • Persona Creation: Describe the process of creating user personas from the analyzed data. What key characteristics did you focus on, and how did these personas help in understanding user needs?
  • Learning and Growth:

    • Skill Development: Reflect on how your skills in data collection, analysis, and UX research have improved.
    • Impact on Understanding: Discuss how this project has deepened your understanding of user behavior and its importance in product development.
    • Future Applications: Consider how you might apply these skills in future projects or career endeavors. What additional areas would you like to explore or improve upon?
  • Ask Questions: Show curiosity by asking the AI mentor questions. For example:

    • “How do you see the role of AI-powered code assistants evolving in the next few years? What are the biggest opportunities and challenges in this field?”
    • “What are some of the key metrics used to evaluate the effectiveness of AI code assistants like GitHub Copilot or AWS CodeWhisperer in real-world applications?”
    • “How do you balance the need for user-friendly interfaces with the complexity of AI algorithms in developing these tools?”

Evaluation for Skill Certifications on the Talent Discovery Platform
  • Data Collection Techniques:

    • Methodology Selection: Discuss the methodologies you chose for data collection (interviews, surveys, trend analysis) and their alignment with project goals. Explain why these methods were suitable for understanding user interactions with AI-powered tools.
    • Execution and Adaptability: Evaluate the execution of your data collection strategies. Share any challenges you faced and how you adapted your approach to overcome them, such as refining survey questions or employing new tools for better data accuracy.
    • Quality of Data Gathered: Assess the quality and comprehensiveness of the data collected. Discuss measures taken to ensure data reliability and validity, such as piloting your surveys or using diverse sampling techniques.
  • Data Analysis Proficiency:

    • Technique Application: Explain the analytical techniques and AI tools you used to process and interpret the collected data. Highlight specific tools like ChatGPT, and how they assisted in pattern recognition and insight generation.
    • Pattern Recognition and Insight Generation: Reflect on your ability to identify significant patterns and derive actionable insights from the data. Share any unexpected findings or challenges encountered during the analysis process.
  • Application of Insights:

    • From Data to Design: Discuss how you transformed raw data into actionable insights and design strategies. Explain how these insights informed the development of user personas and contributed to improving AI-powered code assistants.
    • Persona Creation and Usage: Describe the process of creating detailed user personas from the analyzed data. Highlight key characteristics and how these personas were used to guide design and development decisions.
  • Developing User Journey Maps:

    • Journey Map Creation: Explain how you developed user journey maps based on the collected data and user personas. Describe the key touchpoints, emotional experiences, and pain points captured in the journey maps.
    • Impact on User Experience: Discuss how these journey maps helped in understanding and improving the overall user experience with AI-powered code assistants. Share specific examples of how journey maps informed design improvements.
  • AI Code Assistant Focus:

    • User Behavior and Interaction: Reflect on the behaviors and interaction patterns you observed from users while using AI code assistants like GitHub Copilot, AWS CodeWhisperer, or Google Duet AI. What were the common challenges or advantages users reported?
    • Feature Effectiveness: Discuss the features of these AI code assistants that were most appreciated by users and those that needed improvement. How did these insights influence your design recommendations?
    • Comparative Analysis: Compare user feedback across different AI code assistants. What unique strengths or weaknesses did each tool exhibit according to the users? How did this comparison help in forming your overall understanding of AI code assistant effectiveness?
  • Learning and Growth:

    • Skill Development: Reflect on how your skills in data collection, analysis, and UX research have grown through this project. Discuss any new techniques or tools you learned and applied.
    • Impact on Understanding: Consider how this project has deepened your understanding of user behavior, UX research, and its importance in product development.
    • Future Applications: Discuss how you plan to apply these skills in future projects or career endeavors. Identify additional areas for exploration or improvement based on your project experience.
  • Ask Questions: Show curiosity by asking the AI mentor questions. For example:

    • “In what ways do you think AI code assistants can be improved to better support beginner programmers versus experienced developers?”
    • “How do leading companies measure the ROI of integrating AI code assistants into their development workflows?”
    • “What ethical considerations should be taken into account when developing AI-powered tools that assist with coding and software development?”

We recommend covering 3-5 different areas of the project. Remember, the goal is not just to showcase your technical skills but also to demonstrate your ability to think critically about the application, challenges, and real-world implications of your work. The AI chatbot will likely engage more deeply if you present a well-rounded perspective that goes beyond just coding.


Resources and Learning Materials

  • What is UX?
    An article by the Interaction Design Foundation that explains the basics of user experience research and its role in uncovering problems and design opportunities.

  • UX Research Methods by the Nielsen Norman Group.
    A comprehensive guide on UX research methods from the Nielsen Norman Group, a globally recognized UX research consulting firm.

  • Secondary Research
    An insightful article from dscout’s People Nerds blog that emphasizes the importance of secondary research in forming a foundational understanding of the research domain before moving on to primary, generative research.

  • How to Write an Interview Guide
    A detailed resource from the Nielsen Norman Group on crafting effective interview guides for user interviews.

  • Generative Research Guide
    A complete guide from dscout’s People Nerds blog that explains the concept of generative research, its importance, and how to conduct it.

  • Primary vs Secondary Research
    An article from Guide2Research that contrasts primary and secondary research, detailing when and why each type is used.

  • How to Design a Survey
    A guide from Pew Research Center that provides best practices and methodologies for designing effective surveys.

  • How to Conduct User Interviews
    Another valuable resource from the Nielsen Norman Group that shares effective strategies and techniques for conducting user interviews.

  • Affinity Mapping
    A resource by the Nielsen Norman Group that explains what affinity mapping is, and how it can be used to organize and analyze complex data in UX research.


This project aims to lay the groundwork for students’ final internship by equipping them with a solid foundation in user interviewing and persona building, skills that play a crucial role in designing and enhancing user-centered AI-powered code assistants.

