Protective Optimization Technologies: a proposal for contestation in the world rather than fairness in the algorithm

20 September 2021, 16:00 - By: Seda Gürses

This Responsible Use of Data Seminar is organised by The Academic Fringe Festival (TAFF).

Abstract: Fairness frameworks proposed by computer scientists have come into vogue as a way to address the economic, moral, social, and political impact that digital systems have on populations. These frameworks succeed, in part, by narrowing the problem definition to reduce complexity. Not surprisingly, this simplification limits the ability of these frameworks to capture and mitigate a variety of harms caused by AI-based optimization systems. In this talk, Seda will first characterize these limitations and evaluate their consequences using concepts from requirements engineering and from the social sciences. In particular, she will show that the focus on the inputs and outputs of algorithms misses the way that harms are manifested when systems interact with the "world"; that the focus on bias and discrimination excludes broader harms on populations and their environments due to the introduction of optimization systems; and, most strikingly, that the frameworks' reliance on the service provider limits mitigations to those achievable through an incentivized service provider and leaves unexplored the avenues of action available when the provider is uncooperative or intentionally adversarial.

To broaden the scope of the field, we propose a new class of solutions that explore other approaches to capturing harms and contesting optimization systems: Protective Optimization Technologies (POTs). POTs take into account the negative impacts of systems in the world and provide means to influence the systems' outputs to mitigate these harms. POTs intervene from outside the system and are intended to function when the service provider is not cooperative or is not able to correct the harms that their system imposes on populations and their environments.

Want to join this talk? To receive the Zoom link to join the talk, join the mailing list.

Human-Machine Partnerships in the Future of Work: Exploring the Role of Emerging Technologies in Future Workplaces

23 October 2021, 16:00 - Location: Online workshop

Overview

In this workshop, ‘Human-Machine Partnerships in the Future of Work: Exploring the Role of Emerging Technologies in Future Workplaces’, we seek to unpack the meaning of human-machine partnerships (HMP) by highlighting that how we define HMP will shape how we design technologies in and for the future of work. We discuss social and design implications in various professional and organizational settings and explore how we can broaden and redefine HMP. Encouraging interdisciplinary perspectives, we aim to develop a taxonomy of HMP through which we can expand our relationship with embodied AI agents, and also evaluate and reconsider existing theories, methodologies, and epistemologies in HMP research.

Program

The provisional program includes a mix of talks from selected and invited speakers, and hands-on group activities in which we will explore the use of speculative design to envision future collaborations between humans and AI/robots.

Invited speakers:
- David Abbink, Delft University of Technology
- Janet Vertesi, Princeton University
- Matthew I. Beane, University of California, Santa Barbara
- Bilge Mutlu, University of Wisconsin-Madison

More information: https://sites.google.com/view/cscw2021workshop/home?authuser=1

New LDE trainee in D&I office

Keehan Akbari started at the beginning of September as a new LDE trainee in the Diversity and Inclusion office. What motivated him to work for the D&I office, and what does he expect to achieve during this traineeship? Read the short interview below!

What motivated you to pursue your LDE traineeship in the Diversity and Inclusion office of TU Delft?
I completed both my bachelor's and master's degrees in Cultural Anthropology and Development Sociology at Leiden University. Within these studies, my main area of interest was themes of inclusion and diversity. After being hired as a trainee for the LDE traineeship, and discovering that one of the possible assignments belonged to the Diversity and Inclusion office, my choice was quickly made. I saw this as an excellent opportunity to put the theories I learned during my studies into practice.

What specific skills or experiences do you bring to the D&I office that will help promote inclusivity on campus?
I am someone who likes to connect rather than polarize, taking into account the importance of different perspectives and stakeholders. I believe that this is how one can achieve the most in fostering diversity and inclusion. You need to get multiple parties on board to get the best results.

What are your main goals as you begin your role here, and how do you hope to make an impact?
An important goal for me this year is to get students more involved in diversity and inclusion at the university. One way I will try to accomplish this is by contributing to the creation of D&I student teams. By establishing a D&I student team for each faculty, it will be possible to deal with diversity- and inclusion-related issues that apply to that specific department.

How do you plan to engage with different (student) communities within the university?
Since I am new to TU Delft, the first thing I need to do is expand my network here. I am therefore currently exploring the university and getting to know various stakeholders. Moreover, I intend to be in close contact with various student and study organizations to explore together how we can strengthen cooperation on diversity and inclusion.

Welcome to the team, Keehan, and we wish you lots of success with your traineeship!

Researchers from TU Delft and Cambridge University collaborate on innovative methods to combat Climate Change

For over a year and a half, researchers from TU Delft and the Cambridge University Centre for Climate Repair have worked together on groundbreaking techniques to increase the reflectivity of clouds in the fight against global warming. During a two-day meeting, the teams are discussing their progress.

Researchers at Cambridge are focusing on the technical development of a system that can spray seawater, releasing tiny salt crystals into the atmosphere to brighten the clouds. The team from TU Delft, led by Prof. Dr. Ir. Herman Russchenberg, scientific director of the TU Delft Climate Action Programme and professor of Atmospheric Remote Sensing, is studying the physical effects of this technique. Prof. Russchenberg emphasizes the importance of this research: "We have now taken the first steps towards developing emergency measures against climate change. If it proves necessary, we must be prepared to implement these techniques. Ideally, we wouldn't need to use them, but it's important to investigate how they work now."

Prof. Dr. Ir. Stefan Aarninkhof, dean of the Faculty of Civil Engineering and Geosciences, expresses pride in the team as the first results of this unique collaboration become visible. If the researchers in Delft and Cambridge can demonstrate the potential of the concept, the first small-scale experiments will begin, responsibly, within a year.

This research has been made possible by long-term support from the Refreeze the Arctic Foundation, founded by the family of TU Delft alumnus Marc Salzer Levi. Large donations like these enable the pursuit of innovative, high-impact research that would not otherwise be feasible, demonstrating how collective effort and investment in science can lead to real, transformative solutions for global challenges like climate change.

Climate Action Programme

How system safety can make Machine Learning systems safer in the public sector

Machine Learning (ML), a form of AI in which patterns are discovered in large amounts of data, can be very useful. It is increasingly used, for example, in the chatbot ChatGPT, facial recognition, and speech software. However, there are also concerns about the use of ML systems in the public sector. How do you prevent such a system from, for example, discriminating or making large-scale mistakes with negative effects on citizens? Scientists at TU Delft, including Jeroen Delfos, investigated how lessons from system safety can contribute to making ML systems safer in the public sector. "Policymakers are busy devising measures to counter the negative effects of ML. Our research shows that they can rely much more on existing concepts and theories that have already proven their value in other sectors," says Jeroen Delfos.

Learning from other sectors

In their research, the scientists used concepts from system safety and systems theory to describe the challenges of using ML systems in the public sector. Delfos: "Concepts and tools from the system safety literature are already widely used to support safety in sectors such as aviation, for example by analysing accidents with system safety methods. However, this is not yet common practice in the field of AI and ML. By applying a system-theoretical perspective, we view safety not only as a result of how the technology works, but as the result of a complex set of technical, social, and organisational factors." The researchers interviewed professionals from the public sector to see which factors are recognized and which are still underexposed.

Bias

There is room for improvement to make ML systems in the public sector safer. For example, bias in data is still often seen as a technical problem, while the origin of that bias may lie far outside the technical system. Delfos: "Consider, for instance, the registration of crime. In neighbourhoods where the police patrol more frequently, logically, more crime is recorded, which leads to these areas being overrepresented in crime statistics. An ML system trained to discover patterns in these statistics will replicate or even reinforce this bias. The problem, however, lies in the method of recording, not in the ML system itself."

Reducing risks

According to the researchers, policymakers and civil servants involved in the development of ML systems would do well to incorporate system safety concepts. For example, it is advisable to identify in advance what kinds of accidents one wants to prevent when designing an ML system. Another lesson from system safety, for instance in aviation, is that systems tend to become riskier over time in practice, because safety becomes subordinate to efficiency as long as no accidents occur. "It is therefore important that safety remains a recurring topic in evaluations and that safety requirements are enforced," says Delfos.

Read the research paper.
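The recording bias Delfos describes can be illustrated with a toy feedback loop. The sketch below is a minimal, hypothetical example (the neighbourhood names, rates, and allocation rule are invented for illustration and are not taken from the study): two neighbourhoods have identical true crime rates, but one starts out more heavily patrolled, and a naive rule that allocates patrols in proportion to recorded crime simply reproduces the initial skew.

```python
# Toy sketch (hypothetical numbers): patrol-driven recording feeding a
# naive "allocate patrols where crime is recorded" rule.

# Both neighbourhoods have the same true crime rate per resident.
true_crime_rate = {"A": 0.05, "B": 0.05}
# But neighbourhood A starts out with 70% of patrol capacity.
patrols = {"A": 0.7, "B": 0.3}

def recorded_crime(patrols, rate, population=10_000, detection_per_patrol=0.9):
    # Recorded crime depends on police presence, not only on how much
    # crime actually happens: more patrols -> more incidents recorded.
    return {n: population * rate[n] * patrols[n] * detection_per_patrol for n in rate}

def reallocate_patrols(records):
    # Naive data-driven rule: send patrols in proportion to recorded crime.
    total = sum(records.values())
    return {n: records[n] / total for n in records}

for step in range(5):
    records = recorded_crime(patrols, true_crime_rate)
    patrols = reallocate_patrols(records)
    print(step, {n: round(share, 2) for n, share in patrols.items()})

# Output stays at {'A': 0.7, 'B': 0.3} every round: although the true rates
# are equal, the skewed recording keeps the skewed allocation in place.
# The flaw is in how the data is produced, not in the learning rule itself;
# a superlinear recording effect would amplify the skew rather than merely
# preserve it.
```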