Kashyap is a published AI and Deep Learning researcher and patent author who has completed AI internships at organizations ranging from ed-tech startups to multi-billion-dollar firms to NASA. He has also presented a demo on Deeply Inclusive AI at the United Nations AI for Good Global Conference in Geneva, Switzerland, as part of IVOW, and spoke on cultural AI at a TEDx conference in Cape May. He was the Dean of the Princeton School of AI, where he taught over 780 students in the local community and reached 18,000 students across 152 countries through his online AI course.
Abstract
100 billion dollars. That's how much AI spending is projected to reach in a mere three years. With such rapid investment in a next-generation technology, its safety becomes a heightened concern for everyone who comes into contact with it, from customers to vendors alike. Questions are raised about an AI model's safety: potential security risks, bias against minority groups, a lack of transparency in the decision-making process, or compliance violations in the AI environment, and they often don't have a proper answer. These questions don't apply to just one vertical; they are domain-agnostic, and ensuring that a business's AI is safe is what will set it apart in the eyes of its customers. This talk will explore what AI safety is, why we need it, and how to ensure that one's AI model is safe both technically and procedurally.
Artificial Intelligence
Robotics
Automation
Artificial Intelligence in Healthcare
Artificial Intelligence in Law
Artificial Intelligence for Business and Industries