
The Academic Fringe Festival - Aaron Halfaker: Designing to Learn - Aligning Design Thinking and Data Science to Build Intelligent Tools That Evolve

04 April 2022, 17:00 till 18:00 - Location: Online
by Aaron Halfaker | Microsoft Research

Abstract

"Design to learn" is a collaborative approach to developing intelligent systems that leverages the complementary capabilities of designers and data scientists. Data scientists develop algorithms that work despite the noisy, messy realities of human behavior patterns, and designers develop techniques that reduce noise by aligning interactions closely with how users think about their work. In this talk, I'll describe a set of shared concepts and processes that are intended to help designers and data scientists communicate effectively throughout the development process. This approach is being applied and refined within various product contexts at Microsoft, including email triage, meeting recap, time management, and Q&A routing.

Speaker Biography

Aaron Halfaker is a principal applied research scientist working in the Office of Applied Research in Microsoft's Experiences and Devices organization. He is also a Senior Scientist at the University of Minnesota. Dr. Halfaker's research explores the intersection of productive information work and the application of advanced technologies (AI) to support productivity. In his systems-building research, he has worn many hats: full-stack engineer, ethnographer, engineering manager, UX designer, community manager, and research scientist. He is best known for building ORES, an open infrastructure for machine learning in Wikipedia. His research and systems engineering have been featured in the tech media, including Wired, MIT Tech Review, BBC Technology, The Register, and Netzpolitik, among others. Dr. Halfaker reviews and coordinates for top-tier journals and venues in the social computing and human-centered AI space, including ACM CHI, ACM GROUP, ACM CSCW, Transactions on Social Computing, WWW, and JASIST.

Homepage: https://www.microsoft.com/en-us/research/people/ahalfaker/

More information

In this second edition on the topic of "Responsible Use of Data", we take a multi-disciplinary view and explore further lessons learned from success stories and from examples in which the irresponsible use of data can create and foster inequality and inequity, perpetuate bias and prejudice, or produce unlawful or unethical outcomes. Our aim is to discuss and draw up guidelines to make the use of data a responsible practice.

Join us

To receive announcements of upcoming presentations and events organized by TAFF and get the Zoom link to join the presentations, join our mailing list.

TAFF-WIS Delft - Visit the website of The Academic Fringe Festival

The Academic Fringe Festival - Nithya Sambasivan: The Myopia of Model Centrism

11 April 2022, 17:00 till 18:00 - Location: Online
by Nithya Sambasivan

Abstract

AI models seek to intervene in increasingly high-stakes domains, such as cancer detection and microloan allocation. What is the view of the world that guides AI development in high-risk areas, and how does this view regard the complexity of the real world? In this talk, I will present results from my multi-year inquiry into how the fundamentals of AI systems (data, expertise, and fairness) are viewed in AI development. I pay particular attention to developer practices in AI systems intended for low-resource communities, especially in the Global South, where people are enrolled as labourers or untapped DAUs (daily active users). Despite the inordinate role these fundamentals play in model outcomes, data work is under-valued; domain experts are reduced to data-entry operators; and fairness and accountability assumptions do not scale past the West. Instead, model development is glamourised, and model performance is viewed as the indicator of success. The overt emphasis on models, at the cost of ignoring these fundamentals, leads to brittle and reductive interventions that ultimately displace functional and complex real-world systems in low-resource contexts. I put forth practical implications for AI research and practice to shift away from model centrism and towards enabling human ecosystems; in effect, building safer and more robust systems for all.

Speaker Biography

Dr. Nithya Sambasivan is a sociotechnical researcher whose work focuses on solving hard, socially important design problems affecting marginalised communities in the Global South. Her current research re-imagines AI fundamentals to work for low-resource communities. Dr. Sambasivan's work has been widely covered in venues such as VentureBeat, ZDnet, Scroll.in, O'Reilly, New Scientist, the State of AI report, HackerNews and more, and has influenced public policy, including the Indian government's strategy for responsible AI, as well as motivating the NeurIPS Datasets track. As a former Staff Research Scientist at Google Research, she pioneered several original, award-winning research initiatives, such as responsible AI in the Global South, human-data interaction, gender equity online, and next billion users, which fundamentally shaped the company's strategy for emerging markets and landed as new products affecting millions of users in Google Station, Search, YouTube, Android, Maps and more. Dr. Sambasivan founded and managed a blueprint HCI team at Google Research Bangalore and set up the Accra HCI team, in contexts with limited existing HCI pipelines. Her research has received several best paper awards at top-tier computing conferences.

Homepage: https://nithyasambasivan.com/

More information

In this second edition on the topic of "Responsible Use of Data", we take a multi-disciplinary view and explore further lessons learned from success stories and from examples in which the irresponsible use of data can create and foster inequality and inequity, perpetuate bias and prejudice, or produce unlawful or unethical outcomes. Our aim is to discuss and draw up guidelines to make the use of data a responsible practice.

Join us

To receive announcements of upcoming presentations and events organized by TAFF and get the Zoom link to join the presentations, join our mailing list.

TAFF-WIS Delft - Visit the website of The Academic Fringe Festival


New LDE trainee in D&I office

Keehan Akbari started at the beginning of September as a new LDE trainee in the Diversity and Inclusion (D&I) office. What motivated him to work for the D&I office, and what does he expect to achieve during this traineeship? Read the short interview below!

What motivated you to pursue your LDE traineeship in the Diversity and Inclusion office of TU Delft?

I completed both a bachelor's and a master's degree in Cultural Anthropology and Development Sociology at Leiden University. Within these studies, my main area of interest was themes of inclusion and diversity. After being hired as a trainee for the LDE traineeship, and discovering that one of the possible assignments belonged to the Diversity and Inclusion office, my choice was quickly made. I saw this as an excellent opportunity to put the theories I learned during my studies into practice.

What specific skills or experiences do you bring to the D&I office that will help promote inclusivity on campus?

I am someone who likes to connect rather than polarize, taking into account the importance of different perspectives and stakeholders. I believe that this is how one can achieve the most in fostering diversity and inclusion. You need to get multiple parties on board to get the best results.

What are your main goals as you begin your role here, and how do you hope to make an impact?

An important goal for me this year is to get students more involved in diversity and inclusion at the university. One way I will try to accomplish this is by contributing to the creation of D&I student teams. By establishing a D&I student team for each faculty, it will be possible to deal with diversity- and inclusion-related issues that apply to that specific faculty.

How do you plan to engage with different (student) communities within the university?

Since I am new to TU Delft, the first thing I need to do is expand my network here. Therefore, I am currently busy exploring the university and getting to know various stakeholders. Moreover, I intend to stay in close contact with various student and study organizations to explore together how to strengthen cooperation on diversity and inclusion.

Welcome to the team, Keehan, and we wish you lots of success with your traineeship!

Researchers from TU Delft and Cambridge University collaborate on innovative methods to combat Climate Change

For over a year and a half, researchers from TU Delft and the Cambridge University Centre for Climate Repair have worked together on groundbreaking techniques to increase the reflectivity of clouds in the fight against global warming. During a two-day meeting, the teams are discussing their progress.

Researchers at Cambridge are focusing on the technical development of a system that can spray seawater, releasing tiny salt crystals into the atmosphere to brighten the clouds. The team from TU Delft, led by Prof. Dr. Ir. Herman Russchenberg, scientific director of the TU Delft Climate Action Programme and professor of Atmospheric Remote Sensing, is studying the physical effects of this technique. Prof. Russchenberg emphasizes the importance of this research: "We have now taken the first steps towards developing emergency measures against climate change. If it proves necessary, we must be prepared to implement these techniques. Ideally, we wouldn't need to use them, but it's important to investigate how they work now."

Prof. Dr. Ir. Stefan Aarninkhof, dean of the Faculty of Civil Engineering and Geosciences, expresses pride in the team now that the first results of this unique collaboration are becoming visible. If the researchers in Delft and Cambridge can demonstrate the potential of the concept, the first small-scale experiments will responsibly begin within a year.

This research has been made possible thanks to the long-term support of the Refreeze the Arctic Foundation, founded by the family of TU Delft alumnus Marc Salzer Levi. Large donations like these enable innovative, high-impact research that would not otherwise be feasible, demonstrating how collective effort and investment in science can lead to real, transformative solutions for global challenges like climate change.

Climate Action Programme

How system safety can make Machine Learning systems safer in the public sector

Machine Learning (ML), a form of AI in which patterns are discovered in large amounts of data, can be very useful. It is increasingly used, for example, in the chatbot ChatGPT, facial recognition, and speech software. However, there are also concerns about the use of ML systems in the public sector. How do you prevent such a system from, for example, discriminating or making large-scale mistakes with negative effects on citizens? Scientists at TU Delft, including Jeroen Delfos, investigated how lessons from system safety can contribute to making ML systems safer in the public sector. "Policymakers are busy devising measures to counter the negative effects of ML. Our research shows that they can rely much more on existing concepts and theories that have already proven their value in other sectors," says Jeroen Delfos.

Learning from other sectors

In their research, the scientists used concepts from system safety and systems theory to describe the challenges of using ML systems in the public sector. Delfos: "Concepts and tools from the system safety literature are already widely used to support safety in sectors such as aviation, for example by analysing accidents with system safety methods. However, this is not yet common practice in the field of AI and ML. By applying a system-theoretical perspective, we view safety not only as a result of how the technology works, but as the result of a complex set of technical, social, and organisational factors." The researchers interviewed professionals from the public sector to see which factors are recognized and which are still underexposed.

Bias

There is room for improvement to make ML systems in the public sector safer. For example, bias in data is still often seen as a technical problem, while the origin of that bias may lie far outside the technical system. Delfos: "Consider, for instance, the registration of crime. In neighbourhoods where the police patrol more frequently, more crime is logically recorded, which leads to these areas being overrepresented in crime statistics. An ML system trained to discover patterns in these statistics will replicate or even reinforce this bias. However, the problem lies in the method of recording, not in the ML system itself."

Reducing risks

According to the researchers, policymakers and civil servants involved in the development of ML systems would do well to incorporate system safety concepts. For example, it is advisable to identify in advance what kinds of accidents one wants to prevent when designing an ML system. Another lesson from system safety, for instance in aviation, is that systems tend to become riskier over time in practice, because safety becomes subordinate to efficiency as long as no accidents occur. "It is therefore important that safety remains a recurring topic in evaluations and that safety requirements are enforced," says Delfos.

Read the research paper.
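The recording bias Delfos describes can be made concrete with a small simulation. The Python sketch below is purely illustrative and not part of the TU Delft study: the neighbourhood names, crime rates, and patrol intensities are made-up assumptions. Four neighbourhoods have identical underlying crime, but patrols observe different shares of it, so a naive model trained on the recorded counts ranks the most-patrolled neighbourhood as the riskiest and would steer even more patrols there.

    # Illustrative only: how patrol-driven recording bias ends up in a "risk model".
    # All numbers and neighbourhood names are hypothetical.
    import random

    random.seed(42)

    TRUE_CRIME_RATE = 50  # identical underlying incidents per month in every neighbourhood
    PATROL_INTENSITY = {"A": 0.9, "B": 0.6, "C": 0.3, "D": 0.1}  # share of incidents observed

    def recorded_crime(true_rate, patrol_share):
        # Only incidents that a patrol happens to observe end up in the statistics.
        return sum(1 for _ in range(true_rate) if random.random() < patrol_share)

    # "Training data": twelve months of recorded (not true) crime per neighbourhood.
    history = {
        n: [recorded_crime(TRUE_CRIME_RATE, share) for _ in range(12)]
        for n, share in PATROL_INTENSITY.items()
    }

    # A naive "model": predicted risk = average recorded crime.
    # It faithfully learns the recording bias, not the (identical) underlying crime rates.
    predicted_risk = {n: sum(months) / len(months) for n, months in history.items()}

    # Reallocating patrols toward the highest predicted risk would over-police
    # neighbourhood A even more, reinforcing the bias in next year's statistics.
    for n in sorted(predicted_risk, key=predicted_risk.get, reverse=True):
        print(f"{n}: predicted risk {predicted_risk[n]:.1f} (true rate {TRUE_CRIME_RATE})")

Running the sketch orders the neighbourhoods by patrol intensity rather than by actual crime, which is exactly the point the researchers make: the bias enters through the data collection, not through the learning algorithm itself.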