
          1. 2019-05-29 | Lingxiao Huang: Fairness in Automated Decision-Making Tasks


            Abstract

            Automated decision-making algorithms are increasingly deployed and significantly affect people's lives. Recently, there has been growing concern that such algorithms may systematically discriminate against minority groups of individuals. Thus, developing algorithms that are "fair" with respect to sensitive attributes has become an important problem.

            In this talk, I will first introduce the motivation for "fairness" in real-world applications and how to model "fairness" in theory. Then I will present several recent advances in designing algorithms that maintain fairness requirements in automated decision-making tasks, including multiwinner voting, personalization, classification, and clustering.
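            As an illustration of how a group-fairness requirement can be modeled for clustering, the short Python sketch below checks whether each demographic group's share inside every cluster stays close to its share in the overall population. The function name, the tolerance parameter alpha, and the proportional-representation criterion are illustrative assumptions for this announcement, not the specific formulation presented in the talk.

from collections import Counter

def respects_proportional_fairness(labels, groups, alpha=0.2):
    """Return True if every cluster's group proportions stay within
    +/- alpha of the corresponding global proportions.
    (Illustrative fairness check, not the speaker's algorithm.)"""
    n = len(labels)
    global_share = {g: c / n for g, c in Counter(groups).items()}

    # Collect the group membership of each cluster.
    clusters = {}
    for lbl, grp in zip(labels, groups):
        clusters.setdefault(lbl, []).append(grp)

    # Compare each cluster's group proportions against the global ones.
    for members in clusters.values():
        counts = Counter(members)
        size = len(members)
        for g, target in global_share.items():
            if abs(counts.get(g, 0) / size - target) > alpha:
                return False
    return True

# Toy example: two clusters over two demographic groups "A" and "B".
labels = [0, 0, 0, 1, 1, 1]              # cluster assignment per point
groups = ["A", "B", "A", "B", "A", "B"]  # sensitive attribute per point
print(respects_proportional_fairness(labels, groups))  # True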


            Time

            May 29 (Wednesday), 10:00-11:00


            Speaker

            Lingxiao Huang is a postdoctoral researcher in computer science at EPFL, where he is advised by Nisheeth Vishnoi. He joined EPFL in 2017 after receiving his Ph.D. from IIIS, Tsinghua University.

            His current research interests are algorithm design in machine learning and social science. He is passionate about creating novel algorithms motivated by practical challenges.


            Venue

            Room 602, School of Information Management and Engineering

            Shanghai University of Finance and Economics (west of Teaching Building 3)

            100 Wudong Road, Yangpu District, Shanghai