[Paper] Mixed Choice in Asynchronous Multiparty Session Types
We present a multiparty session type (MST) framework with asynchronous mixed choice (MC). We propose a core construct for MC that allows transient inconsistenci...
Microservice architectures are an emergent technology that builds business logic into a suite of small services. Each microservice runs in its own process and the c...
AI coding agents allow software developers to generate code quickly, which raises a practical question for project managers and open source maintainers: can vib...
Software engineering (SWE) agents are improving rapidly, with recent gains largely driven by reinforcement learning (RL). However, RL training is constrained by...
Machine unlearning for large language models often faces a privacy dilemma in which strict constraints prohibit sharing either the server's parameters or the cl...
Modern cloud servers routinely co-locate multiple latency-sensitive microservice instances to improve resource efficiency. However, the diversity of microservic...
PoCo is a technique that aims to enhance modern coverage-based seed selection (CSS) techniques (such as afl-cmin) by gradually removing obstacle conditional sta...
LLM-powered Multi-Agent Systems (MAS) have demonstrated remarkable capabilities in complex domains but suffer from inherent fragility and opaque failure mechani...
With the increasing importance of distributed scientific workflows, there is a critical need to ensure Quality of Service (QoS) constraints, such as minimizing ...
For every real number c ≥ 1 and for all ε > 0, there is a fitness function f : {0,1}^n → ℝ for which the optimal mutation rate for the (1...
Large-scale Graph Neural Networks (GNNs) are typically trained by sampling a vertex's neighbors to a fixed distance. Because large input graphs are distributed,...
Federated Learning (FL) enables a group of clients to collaboratively train a model without sharing individual data, but its performance drops when client data ...