Objectives
The development of artificial intelligence (AI) must ensure human-centric and ethical operation, transparency, and respect for fundamental rights. This course concerns the application of law and ethics to AI. It addresses topics such as AI and human rights, privacy protection (GDPR) and cybersecurity, responsibility and liability, non-discrimination, intellectual property, and safety rules. Besides its apparent advantages, AI entails a number of potential risks, such as opaque decision-making or use for criminal purposes. The human factor, the machine learning process behind algorithms and automated decision-making, and the handling of uncertainty may lead to discriminatory practices. AI technologies may also present new safety risks for users when they are embedded in products and services. Moreover, classical legal liability systems need to be rethought, in particular because the absence of precise and clear statutory provisions may undermine legal certainty. Data protection has been the area of law that has engaged most with AI. The objective of the course is to introduce students to the legal environment of AI, in particular its basic principles and guidelines and the present and possible future framework of these laws (e.g., the Artificial Intelligence Act of the EU). The course helps students recognise and mitigate the risks, understand the accountability and governance implications of AI, and learn what is needed to ensure lawfulness, fairness, and transparency in AI systems. The course is part of the Human Centred Artificial Intelligence Masters (HCAIM) programme.
Academic results
Knowledge
- The students are aware of
- - the social and economic functions of legislation
- - the basic functions of the main areas of law affecting technology, responsibility, and safety
- - the main features of the legal, economic, and business mechanisms that can influence technology, responsibility, and safety
- - relevant approaches that illustrate the impact of regulators on certain questions concerning artificial intelligence methods
- - aspects of analysis of legislation affecting artificial intelligence.
Skills
- The students are able to
- - properly interpret rules and apply them in practice
- - analyse the role, motivations and activities of individual economic actors from a legal and economic point of view
- - grasp the multi-faceted system of contexts needed to model public policy strategy planning in relation to the topic
- - critically analyse the benefits and risks of artificial intelligence from a legal perspective.
Attitude
- The students
- - assess the legal regulation of artificial intelligence in a well-informed manner, drawing on various sources and consciously seeking alternative solutions
- - are open to self-reflection, critical reception, and critical thinking when considering the regulation of artificial intelligence
- - are open to critical self-assessment based on activity, active learning methods, and an experimental style
- - adopt the implementation of legal standards and requirements as a starting point for regulation.
Independence and responsibility
- The students
- - are open to accepting reliable critical remarks,
- - are able to solve practical professional problems independently.
Teaching methodology
Lectures and written communication, use of ICT tools and techniques.
Materials supporting learning
- Lectures, written and oral communication, use of ICT tools and techniques.
General Rules
Assessment of the learning outcomes described under 2.2 is based on two written tests. A further condition is that the student attends at least 70% of the lectures. Completion of the course requires a minimum of 50% based on the aggregate results of the mid-term exams.
Performance assessment methods
A. Detailed description of assessments during the term: complex written assessment of competence-type elements. The test may consist of test questions covering the interpretation of certain concepts and the recognition of their interrelations, and of essay questions examining lexical knowledge and the ability to synthesise. The available working time is 30-90 minutes. An additional 20% of the marks can be obtained by completing the assigned tasks.
Percentage of performance assessments, conducted during the study period, within the rating
Percentage of exam elements within the rating
- Partial performance assessment (homework): 20
- Mid-term tests: 80
Conditions for obtaining a signature, validity of the signature
Assessment of the learning outcomes described under 2.2 is based on two written tests. A further condition is that the student attends at least 70% of the lectures. Completion of the course requires a minimum of 50% based on the aggregate results of the mid-term exams.
Issuing grades
Grade | % |
---|---|
Excellent | 91-100 |
Very good | 85-90 |
Good | 76-84 |
Satisfactory | 63-75 |
Pass | 50-62 |
Fail | 0-49 |
Retake and late completion
1) The exams (midterms) will be corrected within the deadline set by the study and examination rules, and the results will be officially published via Neptune. The Department announces the date when the corrected papers can be inspected on a case-by-case basis.
2) It is possible to improve the mark acquired during the term in accordance with the study and examination rules.
Coursework required for the completion of the subject
Nature of work | Number of hours per term |
---|---|
Participation in contact classes | 28 |
Homework | 10 |
Preparation for the mid-term exams | 52 |
Total | 90 |
Approval and validity of subject requirements
Topics covered during the term
The subject includes the topics detailed in the course syllabus to ensure that the learning outcomes listed under 2.2 can be achieved.
Week | Lecture topics |
---|---|
1. | Introduction: Law, regulation, and technology |
2. | Ethical and legal frameworks for AI: ethical guidelines, international and EU regulation |
3. | AI through a human rights lens: The problematic characteristics of AI systems from a legal perspective |
4. | Automatic decision-making and data bias, data discrimination |
5. | The Artificial Intelligence Act of the EU and risks in specific areas of AI application |
6. | Mid-term exam I. |
7. | Data protection and security |
8. | AI and intellectual property law I. |
9. | AI and intellectual property law II. |
10. | Accountability, liability for AI - AI and civil liability: damage caused by AI, product liability |
11. | AI and criminal law |
12. | Algorithmic manipulation and discrimination of consumers and markets: AI and consumer law, competition law |
13. | Case studies |
14. | Mid-term exam II. |
Additional lecturers
Name | Position | Contact details |
---|---|---|
Dr. Ambrus István | ||
Dr. Mezei Kitti | ||
Dr. Nagy Krisztina | ||
Dr. Schubauer Petra | ||
Dr. Tomasovszky Edit | ||
Dr. Timár Adrienn |