The GAIT workshop addresses the need to instill trust and security in the rapidly evolving domain of generative AI and large language models (LLMs). In the age of advanced artificial intelligence, the development and deployment of generative models raise critical questions about trustworthiness, security, and ethical use. The workshop will explore the forefront of research and practice in creating LLMs that are secure, reliable, and aligned with ethical standards. We invite studies on strategies for enhancing transparency, accountability, and fairness in AI systems. Participants will engage with topics such as data privacy, bias detection and mitigation, and the secure deployment of AI technologies in sensitive environments. The workshop is intended for researchers and technologists dedicated to advancing generative AI while ensuring the technology is developed and used in a manner that earns the trust, and ensures the security, of all stakeholders.
The list of topics includes, but is not limited to:
Submission

Authors are invited to submit original, unpublished research papers as well as industrial practice papers. Simultaneous submission to other conferences is not permitted. Detailed instructions for electronic paper submission, panel proposals, and the review process can be found on the QRS submission page.
Each submission is limited to ten pages and should include a title, the name and affiliation of each author, a 300-word abstract, and up to six keywords. Shorter papers (up to six pages) are also welcome.
All papers must conform to the QRS conference proceedings format (PDF | Word DOCX | LaTeX) and the Submission Guidelines set in advance by QRS 2025. At least one author of each accepted paper is required to pay the full registration fee and present the paper at the workshop. Submissions must be in PDF format and uploaded to the conference submission site. Arrangements are being made to publish extended versions of top-quality papers in selected SCI journals.