AIware 2024
Mon 15 - Tue 16 July 2024, Porto de Galinhas, Brazil
co-located with FSE 2024

This program is tentative and subject to change.

Mon 15 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 10:30
Opening + Keynote 1 + AIware Vision (Main Track at Baobá 1)
09:00
45m
Talk
Automatic Programming vs. Artificial Intelligence
Main Track
James Noble Independent, Wellington, NZ
DOI
09:45
45m
Talk
Towards AI for Software Systems
Main Track
Nafise Eskandani ABB Corporate Research Center, Guido Salvaneschi University of St. Gallen
DOI
10:30 - 11:00
Coffee Break (FSE Social Events at Baobá 3)
10:30
30m
Coffee break
Break
FSE Social Events

11:00 - 12:30
Industry Talk 1 + SE for AIware (Main Track at Baobá 1)
11:00
45m
Talk
Function+Data Flow: A Framework to Specify Machine Learning Pipelines for Digital Twinning
Main Track
Eduardo de Conto Nanyang Technological University; CNRS@CREATE, Blaise Genest IPAL - CNRS - CNRS@CREATE, Arvind Easwaran Nanyang Technological University
DOI
11:45
45m
Talk
Green AI in Action: Strategic Model Selection for Ensembles in Production
Main Track
Nienke Nijkamp Delft University of Technology, June Sallou Delft University of Technology, Niels van der Heijden University of Amsterdam, Luís Cruz Delft University of Technology
DOI
14:00 - 15:30
Industry Talk 2 + Human AI Conversation (Main Track at Baobá 1)
14:00
22m
Talk
Unveiling Assumptions: Exploring the Decisions of AI Chatbots and Human Testers
Main Track
Francisco Gomes de Oliveira Neto Chalmers | University of Gothenburg
DOI
14:22
22m
Talk
RUBICON: Rubric-Based Evaluation of Domain-Specific Human AI Conversations
Main Track
Param Biyani Microsoft, Yasharth Bajpai Microsoft, Arjun Radhakrishna Microsoft, Gustavo Soares Microsoft, Sumit Gulwani Microsoft
DOI
14:45
22m
Talk
From Human-to-Human to Human-to-Bot Conversations in Software Engineering
Main Track
Ranim Khojah Chalmers | University of Gothenburg, Francisco Gomes de Oliveira Neto Chalmers | University of Gothenburg, Philipp Leitner Chalmers | University of Gothenburg
DOI
15:07
22m
Talk
Unveiling the Potential of a Conversational Agent in Developer Support: Insights from Mozilla’s PDF.js Project
Main Track
João Correia PUC-Rio, Morgan C. Nicholson University of São Paulo, Daniel Coutinho Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Caio Barbosa Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Marco Castelluccio Mozilla, Marco Gerosa Northern Arizona University, Alessandro Garcia Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Igor Steinmacher Northern Arizona University
DOI Pre-print
16:00 - 18:00
Security and Safety + Round Table + Day 1 Closing (Main Track at Baobá 1)
16:00
40m
Talk
A Case Study of LLM for Automated Vulnerability Repair: Assessing Impact of Reasoning and Patch Validation Feedback
Main Track
Ummay Kulsum North Carolina State University, Haotian Zhu Singapore Management University, Bowen Xu North Carolina State University, Marcelo d'Amorim North Carolina State University
DOI
16:40
40m
Talk
An AI System Evaluation Framework for Advancing AI Safety: Terminology, Taxonomy, Lifecycle Mapping
Main Track
Boming Xia CSIRO's Data61 & University of New South Wales, Qinghua Lu Data61, CSIRO, Liming Zhu CSIRO’s Data61, Zhenchang Xing CSIRO's Data61
DOI
17:20
40m
Talk
Measuring Impacts of Poisoning on Model Parameters and Embeddings for Large Language Models of Code
Main Track
Aftab Hussain University of Houston, Md Rafiqul Islam Rabin University of Houston, Amin Alipour University of Houston
DOI

Tue 16 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 10:30
Opening Day 2 + Keynote 2 + AIware for Domain-specific Applications (Main Track at Baobá 1)
09:00
30m
Talk
Neuro-Symbolic Approach to Certified Scientific Software Synthesis
Main Track
Hamid Bagheri University of Nebraska-Lincoln, Mehdi Mirakhorli Rochester Institute of Technology, Mohamad Fazelnia University of Hawaii at Manoa, Ibrahim Mujhid University of Hawaii at Manoa, Md Rashedul Hasan University of Nebraska-Lincoln
DOI
09:30
30m
Talk
SolMover: Smart Contract Code Translation Based on Concepts
Main Track
Rabimba Karanjai University of Houston, Lei Xu Kent State University, Weidong Shi University of Houston
DOI
10:00
30m
Talk
The Art of Programming: Challenges in Generating Code for Creative Applications
Main Track
Michael Cook King’s College London
DOI
11:00 - 12:30
Industry Talk 3 + AIware for Code (Main Track at Baobá 1)
11:00
22m
Talk
A Transformer-Based Approach for Smart Invocation of Automatic Code Completion
Main Track
Aral de Moor Delft University of Technology, Arie van Deursen Delft University of Technology, Maliheh Izadi Delft University of Technology
DOI
11:22
22m
Talk
Chain of Targeted Verification Questions to Improve the Reliability of Code Generated by LLMs
Main Track
Sylvain Kouemo Ngassom Polytechnique Montréal, Arghavan Moradi Dakhel Polytechnique Montreal, Florian Tambon Polytechnique Montréal, Foutse Khomh Polytechnique Montréal
DOI
11:45
22m
Talk
Identifying the Factors That Influence Trust in AI Code Completion
Main Track
Adam Brown Google, Sarah D'Angelo Google, Ambar Murillo Google, Ciera Jaspan Google, Collin Green Google
DOI
12:07
22m
Talk
Leveraging Machine Learning for Optimal Object-Relational Database Mapping in Software Systems
Main Track
Sasan Azizian University of Nebraska-Lincoln, Elham Rastegari Creighton University, Hamid Bagheri University of Nebraska-Lincoln
DOI
14:00 - 15:30
Industry Talk 4 + AIware for Software Lifecycle Activities (Main Track at Baobá 1)
14:00
22m
Talk
A Comparative Analysis of Large Language Models for Code Documentation Generation
Main Track
Shubhang Shekhar Dvivedi IIIT Delhi, Vyshnav Vijay IIIT Delhi, Sai Leela Rahul Pujari IIIT Delhi, Shoumik Lodh IIIT Delhi, Dhruv Kumar Indraprastha Institute of Information Technology, Delhi
DOI
14:22
22m
Talk
AI-Assisted Assessment of Coding Practices in Modern Code Review
Main Track
Manushree Vijayvergiya Google, Malgorzata Salawa Google, Ivan Budiselic Google, Dan Zheng Google DeepMind, Pascal Lamblin Google, Marko Ivanković Google; Universität Passau, Juanjo Carin Google, Mateusz Lewko Google Inc, Jovan Andonov Google, Goran Petrović Google Inc, Danny Tarlow Google, Petros Maniatis Google DeepMind, René Just University of Washington
DOI
14:45
22m
Talk
Effectiveness of ChatGPT for Static Analysis: How Far Are We?
Main Track
Mohammad Mahdi Mohajer York University, Reem Aleithan York University, Canada, Nima Shiri Harzevili York University, Moshi Wei York University, Alvine Boaye Belle York University, Hung Viet Pham York University, Song Wang York University
DOI
15:07
22m
Talk
The Role of Generative AI in Software Development Productivity: A Pilot Case Study
Main Track
Mariana Coutinho CESAR School, Lorena Marques CESAR School, Anderson Santos CESAR School, Marcio Dahia CESAR School, Cesar França CESAR School, Ronnie de Souza Santos University of Calgary
DOI
16:00 - 18:00
Industry Talk 5 + AIware Challenge + Day 2 Closing (Challenge Track at Baobá 1)
16:00
40m
Talk
Automated Scheduling for Thematic Coherence in Conferences
Challenge Track
Mahzabeen Emu Queen’s University, Tasnim Ahmed Queen’s University, Salimur Choudhury Queen’s University
DOI
16:40
40m
Talk
Conference Program Scheduling using Genetic Algorithms
Challenge Track
Rucha Deshpande Purdue University, USA, Aishwarya Devi Akila Pandian Purdue University, Vigneshwaran Dharmalingam Purdue University
DOI
17:20
40m
Talk
Investigating the Potential of Using Large Language Models for Scheduling
Challenge Track
Deddy Jobson Mercari, Li Yilin Mercari
DOI

Accepted Papers

A Case Study of LLM for Automated Vulnerability Repair: Assessing Impact of Reasoning and Patch Validation Feedback
Main Track
DOI
A Comparative Analysis of Large Language Models for Code Documentation Generation
Main Track
DOI
AI-Assisted Assessment of Coding Practices in Modern Code Review
Main Track
DOI
An AI System Evaluation Framework for Advancing AI Safety: Terminology, Taxonomy, Lifecycle Mapping
Main Track
DOI
A Transformer-Based Approach for Smart Invocation of Automatic Code Completion
Main Track
DOI
Automatic Programming vs. Artificial Intelligence
Main Track
DOI
Chain of Targeted Verification Questions to Improve the Reliability of Code Generated by LLMs
Main Track
DOI
Effectiveness of ChatGPT for Static Analysis: How Far Are We?
Main Track
DOI
From Human-to-Human to Human-to-Bot Conversations in Software Engineering
Main Track
DOI
Function+Data Flow: A Framework to Specify Machine Learning Pipelines for Digital Twinning
Main Track
DOI
Green AI in Action: Strategic Model Selection for Ensembles in Production
Main Track
DOI
Identifying the Factors That Influence Trust in AI Code Completion
Main Track
DOI
Leveraging Machine Learning for Optimal Object-Relational Database Mapping in Software Systems
Main Track
DOI
Measuring Impacts of Poisoning on Model Parameters and Embeddings for Large Language Models of Code
Main Track
DOI
Neuro-Symbolic Approach to Certified Scientific Software Synthesis
Main Track
DOI
RUBICON: Rubric-Based Evaluation of Domain-Specific Human AI Conversations
Main Track
DOI
SolMover: Smart Contract Code Translation Based on Concepts
Main Track
DOI
The Art of Programming: Challenges in Generating Code for Creative Applications
Main Track
DOI
The Role of Generative AI in Software Development Productivity: A Pilot Case Study
Main Track
DOI
Towards AI for Software Systems
Main Track
DOI
Unveiling Assumptions: Exploring the Decisions of AI Chatbots and Human Testers
Main Track
DOI
Unveiling the Potential of a Conversational Agent in Developer Support: Insights from Mozilla’s PDF.js Project
Main Track
DOI Pre-print

Call for Papers

“Software for all and by all” is the future of humanity. AIware, i.e., AI-powered software, has the potential to democratize software creation. We must reimagine software and software engineering (SE), enabling individuals of all backgrounds to participate in its creation with higher reliability and quality. Over the past decade, software has evolved from human-driven Codeware to the first generation of AIware, known as Neuralware, developed by AI experts. Foundation Models (FMs, including Large Language Models or LLMs), like GPT, ushered in software’s next generation, Promptware, led by domain and prompt experts. However, this merely scratches the surface of the future of software. We are already witnessing the emergence of the next generation of software, Agentware, in which humans and intelligent agents jointly lead the creation of software. With the advent of brain-like World Models and brain-computer interfaces, we anticipate the arrival of Mindware, representing another generation of software. Agentware and Mindware promise greater autonomy and widespread accessibility, with non-expert individuals, known as Software Makers, offering oversight to autonomous agents.

The software engineering community will need to develop fundamentally new approaches and evolve existing ones so that they are suitable for a world in which software creation is within the reach of Software Makers of all levels of SE expertise, as opposed to solely expert developers. We must recognize a shift in where expertise lies in software creation and start making the needed changes in the type of research that is conducted, the way that SE is taught, and the support that is offered to Software Makers.

The 1st ACM International Conference on AI-powered Software (AIware 2024, https://2024.aiwareconf.org/) will be hosted on July 15th-16th, 2024, at Porto de Galinhas, Brazil, co-located with FSE’24. AIware 2024 aims to bring the software engineering community together in anticipation of the upcoming changes driven by FMs and to look at them from the perspective of AI-powered software and its evolution. AIware 2024 promotes cross-disciplinary discussions, identifies emerging research challenges, and establishes a new research agenda for the community in the Foundation Model era.

Topics of interest

Topics of interest of the AIware conference include, but are not limited to, the following:

  • What will future software look like in the FM era?
  • How can legacy software be integrated into future AIware?
  • Do existing programming models (e.g., object-oriented or functional programming) and SE practices (e.g., test-driven development and agile) remain suitable for developing and maintaining software in the FM era?
  • What roles do autonomous agents play in the development and maintenance of software in the FM era?
  • How will inner and open source collaboration evolve in the FM era?
  • What kind of release engineering practices do we need for FM-powered software applications? Are LLMOps comprehensive enough to capture the release engineering needs of AIware in the FM era?
  • How do we debug and monitor AIware in the FM era?
  • How should we change SE curriculum, training and mentoring in the FM era?
  • How to evolve FMs from the perspective of AIware and its makers in the FM era?
  • How do human interactions and perceptions shape the development and implementation of AIware in the FM era?
  • How do we measure and improve the trustworthiness of AIware in the FM era?
  • What are the implications and effectiveness of foundation models in improving software engineering practices and outcomes?
  • How does AIware impact developer productivity?

Types of submissions

The AIware 2024 Main Track welcomes submissions from both academia and industry. At least one author of each accepted submission will be required to attend the conference and present the paper. Submissions can include, but are not limited to, case studies, vision papers, literature surveys, position papers, and theoretical and applied research papers.

Page limits:

  • Case studies, literature surveys, and theoretical and applied research papers: 6 - 8 pages;
  • Vision papers, position papers: 2 - 4 pages;

Both categories allow an additional 1-2 pages of references. The page limits are strict.

Submission guidelines

All authors should use the official “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the following LaTeX code at the start of the document, where the review option produces line numbers for easy reference by the reviewers and the anonymous option omits author names:

\documentclass[sigconf,review,anonymous]{acmart}
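
For orientation only, the line above might be expanded into a complete, compilable skeleton along the following lines; the title, author block, abstract, and section text are placeholders for illustration, not requirements of this call:

\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}

% Placeholder metadata: the anonymous option suppresses author
% identities in the compiled PDF, and the review option adds line numbers.
\title{Your Paper Title}
\author{Anonymous Author(s)}
\affiliation{%
  \institution{Anonymous Institution}
  \country{Anonymous Country}}

\begin{abstract}
A short abstract of the submission.
\end{abstract}

\maketitle

\section{Introduction}
Body text of the submission.

\end{document}

For the camera-ready version, the review and anonymous options are typically removed; follow the instructions that accompany the acceptance notification.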

Papers must be submitted electronically through the following submission site: https://aiware24.hotcrp.com/.

All submissions must be in PDF. All papers must be written in English.

All submissions are subject to ACM policies including ACM Publications Policies, ACM’s new Publications Policy on Research Involving Human Participants and Subjects, ACM Policy and Procedures on Plagiarism, ACM Policy on Prior Publication and Simultaneous Submissions, and the ACM Policy on Authorship and its accompanying FAQ released April 20, 2023. In particular, authors should pay attention to the following points:

  • Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was used to generate sections of this Work, including text, tables, graphs, code, data, citations, etc. If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.
  • If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.

Review and evaluation process

A double-anonymous review process will be employed for submissions to the main track. The submission must not reveal the identity of the authors in any way. Papers that violate the double-anonymous requirement will be desk-rejected. For more details on the double-anonymous process, please refer to FSE’s double-anonymous review process.

All submissions will be desk-checked to make sure that they are within the scope of the conference and satisfy the submission requirements (e.g., page limits and anonymity). Three members of the Program Committee will then be assigned to each submission for the review process. Program Committee members can bid on submissions to review. The Program Committee will discuss the review results virtually and decide on the accepted submissions. Accepted submissions will be published in the ACM Digital Library.

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Awards

The best full-length papers accepted in the main track of AIware will be recognized with ACM SIGSOFT Distinguished Paper Awards.

In addition, selected AIware papers will be invited to be revised and extended for consideration in a special issue of the Empirical Software Engineering journal by Springer.

Important dates

All dates are 23:59:59 AoE (UTC-12h).

  • Intent submission (optional): March 22, 2024
  • Paper submission: March 29, 2024
  • Paper notification: April 26, 2024
  • Camera-ready: May 17, 2024
  • Conference dates: July 15-16, 2024

Notes

We are aware that the event dates for AIware and 2030 Software Engineering conflict. The organizers of the two events are coordinating their programs so that authors will have the opportunity to benefit from and participate in both. While a given paper can be submitted to only one of these events, the organizers will, at their discretion, align both events’ programs to allow cross-pollination between the two communities.