Here is the issue:
- Public understanding of AI is low, while risk apprehension is high
- Worldwide, numerous reports highlight contentious issues: harm (safety), intrusion (privacy), opaqueness and manipulation (transparency), discrimination (fairness), and exploitation or oppression (accountability and oversight)
and we have seen the consequences, for example:
- Technology is perceived as risky
- Government regulation is pre-emptive
- Businesses tend to prefer low-risk proof-of-concept work
- The technology's potential is realized only by the few with large expert teams
- Practitioners move sideways because career paths are not developed
- There are no standards to identify proficiency at the senior, lead, and principal levels
Can we do something about it?
Join us for 60 minutes to discuss quality standards for the profession and for the industry as a whole.
Proof-of-concept
In 2021, the AI Guild began accrediting top-level experts in data analytics and machine learning. With 800+ members, it is Europe's leading practitioner community and has a track record of supporting #datacareers and facilitating the deployment of data analytics and machine learning. You can find out more at https://www.theguild.ai
Accreditation
We are piloting a quality standard with senior industry experts (5 women, 7 men). The pilot covers domains such as data analytics, data science, and data engineering, as well as machine learning, deep learning, natural language processing, and computer vision. Practitioners are drawn from industry, startups, and universities across Europe, South and North America, and Asia.
The 60-minute format
- A 20-minute introduction to quality standards, packed with experience and insight
- A switch to Q&A, with further discussion for the remaining time
- Backed by extensive research (publication, presentation)