Dr. Sunjung Kim Thao | Vision and Language | Best Innovation Award

Doctorate at University of Central Arkansas, United States

👨‍🎓 Profiles

Scopus

Education

  • Ph.D. in Speech, Language, and Hearing Sciences, University of Florida (2012)
  • M.A. in Speech-Language Pathology, Ewha Womans University, Korea (2005)
  • B.A. in Psychology, Ewha Womans University, Korea (2003)

🏥 Clinical Experience

Speech-Language Pathologist at Kim Wha Soo Speech-Language Clinic, Seoul, Korea (2005-2007)

🔬 Research Interests

Dr. Thao’s research focuses on enhancing language and literacy skills in children, particularly those with reading difficulties. Her studies promote inclusive education and community engagement through innovative service-learning projects.

🏆 Honors & Awards

  • Emerging Dyslexia Researcher Award, International Dyslexia Association (2017)
  • Graduate Research Fellowship, University of Florida (2011-2012)
  • Certificate of Achievement for Outstanding Academic Accomplishment, University of Florida (2010)

🌐 Professional Affiliations

  • Licensed Speech-Language Pathologist, Arkansas
  • Certificate of Clinical Competence in Speech-Language Pathology (CCC-SLP), ASHA
  • Member of the Asia Pacific Society of Speech, Language and Hearing (2022-Present)

Publications

Multimodal learning: How task types affect learning of students with reading difficulties

  • Authors: Thao, S.K., Lombardino, L.J., Tibi, S., Gleghorn, A.
  • Journal: Clinical Archives of Communication Disorders
  • Year: 2022

Indexing effects of phonological representational strength on rapid naming using rime neighborhood density

  • Authors: Wiseheart, R., Kim, S., Lombardino, L.J., Altmann, L.J.P.
  • Journal: Applied Psycholinguistics
  • Year: 2019

Multimedia learning: Contributions of learners’ verbal abilities and presentation modalities

  • Authors: Kim, S., Lombardino, L.J.
  • Journal: International Journal of Learning, Teaching and Educational Research
  • Year: 2019

Exploring Text and Icon Graph Interpretation in Students with Dyslexia: An Eye-tracking Study

  • Authors: Kim, S., Wiseheart, R.
  • Journal: Dyslexia
  • Year: 2017

Simple sentence reading and specific cognitive functions in college students with dyslexia: An eye-tracking study

  • Authors: Kim, S., Lombardino, L.J.
  • Journal: Clinical Archives of Communication Disorders
  • Year: 2016

Vision and Language

Introduction to Vision and Language

Vision and Language research is a multidisciplinary field that explores the intersection of computer vision and natural language processing (NLP). It focuses on developing AI systems that can understand, interpret, and generate both visual and textual information. This area of study is vital for bridging the gap between visual perception and human-like language understanding, opening doors to applications such as image captioning, visual question answering, and content recommendation.
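A common pattern behind tasks like image captioning is an encoder-decoder pipeline: a vision encoder maps the image to a feature vector, and a language decoder generates tokens conditioned on that vector. The sketch below illustrates only the structure of this pipeline; the vocabulary, the mean-color "encoder", and the random, untrained weights are all toy assumptions, not a real model.

```python
import numpy as np

# Toy vocabulary and untrained weights -- purely illustrative; a real system
# learns these from large corpora of paired image-caption data.
VOCAB = ["<start>", "a", "dog", "runs", "<end>"]
rng = np.random.default_rng(0)

def encode_image(pixels):
    """Stand-in vision encoder: collapse the pixel grid to a fixed-length
    feature vector (here, just the mean color over the image)."""
    return pixels.mean(axis=(0, 1))  # shape (3,) for an RGB image

def greedy_caption(feature, max_len=5):
    """Greedy decoding: at each step score every vocabulary token from the
    image feature plus the previous token, and emit the highest-scoring one."""
    W_img = rng.normal(size=(len(VOCAB), feature.shape[0]))  # image -> token scores
    W_tok = rng.normal(size=(len(VOCAB), len(VOCAB)))        # prev token -> token scores
    tokens = [0]  # start with <start>
    for _ in range(max_len):
        prev_onehot = np.eye(len(VOCAB))[tokens[-1]]
        scores = W_img @ feature + W_tok @ prev_onehot
        nxt = int(np.argmax(scores))
        tokens.append(nxt)
        if VOCAB[nxt] == "<end>":
            break
    return [VOCAB[t] for t in tokens[1:]]

image = rng.random((4, 4, 3))  # fake 4x4 RGB image
print(greedy_caption(encode_image(image)))
```

With untrained weights the emitted tokens are arbitrary; the point is the control flow (encode once, decode token by token), which is the same loop a trained captioning model runs.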

Subtopics in Vision and Language:

  1. Image Captioning: Researchers work on models that generate descriptive text for images, allowing machines to explain visual content in natural language. This subfield explores techniques to improve the quality and coherence of generated captions.
  2. Visual Question Answering (VQA): VQA models enable machines to answer questions about images. Research focuses on enhancing the reasoning capabilities of these models to provide accurate and context-aware answers.
  3. Visual Dialog: Visual dialog systems extend VQA to engage in multi-turn conversations about images. Research in this subtopic aims to improve the depth and coherence of dialog interactions between humans and machines.
  4. Cross-Modal Retrieval: This area explores techniques for retrieving images or text based on queries from the other modality. For example, retrieving images based on textual descriptions or finding relevant textual information from images.
  5. Visual Commonsense Reasoning: Developing models capable of understanding and reasoning about common-sense knowledge in images, such as inferring actions, events, or relationships depicted in visual scenes.
  6. Visual Storytelling: Research focuses on generating coherent narratives or stories based on sequences of images, merging visual and textual storytelling for applications in multimedia content creation and entertainment.
  7. Multimodal Machine Translation: Investigating techniques to translate between languages while considering both textual and visual input, enabling more accurate and context-aware translations in cross-lingual scenarios.
  8. Visual Sentiment Analysis: The analysis of emotions and sentiments conveyed in visual content, helping systems understand the emotional context of images and videos for applications in social media analysis and mental health monitoring.
  9. Visual Explanation and Reasoning: Developing models that can provide explanations for their visual predictions, allowing users to understand how AI systems arrive at their conclusions, crucial for trust and transparency.
  10. Accessibility and Assistive Technology: Research in creating AI systems that assist individuals with visual impairments by providing detailed descriptions of visual scenes and objects, enabling greater accessibility to visual content.
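Cross-modal retrieval (subtopic 4) is often implemented by embedding images and text into a shared vector space and ranking by cosine similarity. A minimal sketch, assuming hand-made 3-dimensional embeddings in place of the trained encoders (e.g., CLIP-style models) a real system would use:

```python
import numpy as np

# Toy shared embedding space: in practice these vectors come from trained
# image/text encoders; here they are hand-made for illustration.
image_embeddings = {
    "beach.jpg":    np.array([0.9, 0.1, 0.0]),
    "mountain.jpg": np.array([0.1, 0.9, 0.1]),
    "city.jpg":     np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(text_embedding, k=1):
    """Rank all images by cosine similarity to the text query's embedding
    and return the top-k filenames."""
    ranked = sorted(image_embeddings.items(),
                    key=lambda kv: cosine(text_embedding, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical embedding of a query like "sunny shoreline"
query = np.array([0.85, 0.15, 0.05])
print(retrieve(query))  # → ['beach.jpg']
```

The same ranking works in the other direction (image query against text embeddings), which is why a single shared space supports both retrieval directions.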

Vision and Language research holds great promise in creating more intuitive and capable AI systems that can understand and communicate about the visual world in a way that mirrors human comprehension. These subtopics reflect the ongoing efforts to advance the integration of vision and language understanding in artificial intelligence.
