Projects

Di-Fic: Enhancing Digital Citizenship education through Audiovisual Fiction?

Digital citizenship, defined by the Council of Europe as the capacity to participate responsibly in communities through competent and positive engagement with digital technologies, is becoming an increasingly pressing societal issue as our lives continue to shift online. Two-thirds of citizens have expressed a desire for more education and training to address insufficient digital competencies. However, current approaches to digital citizenship education exhibit limitations, particularly regarding the scope of the content conveyed, its operationalization for skills acquisition, and its failure to account for learners' pre-established representations of digital technologies. Given these challenges, one promising avenue resides in the use of audiovisual fiction as a vector for fostering digital citizenship. Indeed, recent studies indicate a clear connection between the consumption of fiction (e.g., science fiction movies) and aspects of digital citizenship beyond coding. Furthermore, the use of fiction in classrooms for purposes other than digital citizenship education has a long tradition of established and operationalized practices, such as design fiction with clear evaluation instruments. Lastly, students arrive in the classroom with pre-established representations of digital technologies, possibly skewed by fictional tropes, that need to be accounted for.

MuLLSA: Mutation with LLM and Static Analysis

Hunting for bugs and vulnerabilities is one of the most important tasks in computer science, especially in the context of web applications. Many techniques exist to detect and prevent these issues, one of the most widely used being mutation testing. However, creating mutants manually is a time-consuming and error-prone process. To address this, we combine static analysis with an LLM to automatically generate mutants. In this study, we compare the performance of an LLM in producing mutants based on three different analysis sources: the static analysers KAVe and WAP, and the LLM itself. Our results show significant variability between tools. Mutants produced using traditional static analysers vary heavily depending on the type of vulnerability, and tend to perform better when tools are combined. With the LLM, the quality of mutants is more consistent across different vulnerabilities, and the overall code coverage is significantly higher than with traditional approaches. On the other hand, while LLM-generated mutants have a higher success rate in passing initial verification, they often contain syntactic or semantic errors. These findings suggest that LLMs are a promising addition to automated vulnerability testing workflows, especially when used in conjunction with static analysis tools. However, further refinement is needed to reduce the generation of incorrect or invalid code and to better align with real-world exploitability.
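To illustrate the core idea behind mutation testing, here is a minimal, hypothetical sketch in Python: a mutant is created by swapping a comparison operator, and the mutant is "killed" if the test suite that passes on the original fails on it. This toy example is not the MuLLSA pipeline (which targets web-application vulnerabilities using KAVe, WAP, and an LLM); the function name and mutation operator are illustrative assumptions.

```python
import ast

# Original program under test (hypothetical example).
SOURCE = """
def is_adult(age):
    return age >= 18
"""

class CompareMutator(ast.NodeTransformer):
    """Classic relational-operator mutation: replace >= with >."""
    def visit_Compare(self, node):
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op
                    for op in node.ops]
        return node

def generate_mutant(source):
    """Parse the source, apply the mutation, and emit mutated code."""
    tree = ast.parse(source)
    mutated = ast.fix_missing_locations(CompareMutator().visit(tree))
    return ast.unparse(mutated)

def run_tests(source):
    """A tiny 'test suite': True if all assertions pass on this code."""
    ns = {}
    exec(source, ns)
    try:
        assert ns["is_adult"](18) is True   # boundary case kills >= -> >
        assert ns["is_adult"](17) is False
        return True
    except AssertionError:
        return False

mutant = generate_mutant(SOURCE)
# The mutant is killed when the suite passes the original but fails it.
killed = run_tests(SOURCE) and not run_tests(mutant)
print("mutant killed:", killed)  # prints: mutant killed: True
```

A boundary-value test (age exactly 18) is what kills this mutant; a weaker suite that only checked ages 25 and 10 would let it survive, which is exactly the kind of test-quality signal mutation testing provides.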