But if your security kungfu is good, consider the tests you might face by looking at last year’s Defcon qualifying challenges. I barely know enough to cover the security challenges as press. I am not qualified for the security contests, always the highlight of Defcon events. This years DefCon agenda is so good that I decided to attend (Caesars Palace room booked). We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed.” So true. The DefCon leader, Rumman Chowdhury, says: “We need thousands of people. Many are appropriate for tech-lawyers, especially those with interest and some knowledge in cybersecurity or artificial intelligence. The activities and agenda they have laid out for Def Con 31 are also impressive. The AI Village non-profit group has a very impressive leadership team. Thousands of hackers are expected to respond and go to Vegas. The Fact Sheet goes on to specifically invite hackers to participate at DEFCON 31 in Las Vegas on August 10–13, 2023, especially in the AI Village component. This White House Fact Sheet encourages white-hat hackers to red-team test vendor’s products to improve the safety and ethics of generative type Ai models. The White House recommendations are made in its Fact Sheet on AI dated May 4, 2023. (By the way, absolutely no Ai was used to write this article, but all images are a joint venture between me, Ralph Losey, and Midjourney.) Even more surprising, they were encouraging hackers to go to the next DefCon in Las Vegas ( Caesars Forum) by the thousands to Red Team test leading Ai software. Shortly after the blog published, I learned that White House advisors on artificial intelligence were of like mind. That is the best way to protect ourselves from unethical Ai. Then we need to report these defects to the software developers, such as Open AI. We need experts to try to get them to hallucinate, to over-ride the safety protocols, and generally say things and give advice that should be forbidden (such as how to build a nuclear weapon, which is one I tested) or is biased. We need hackers to prod, con, trick and manipulate Ai chatbots to jailbreak them. Hallucinating Bot image by Losey/Midjourney My last blog, ‘ A Discussion of Some of the Ethical Constraints Built Into ChatGPT‘ concluded with my encouraging Red Team testing.
0 Comments
Leave a Reply. |
Details
AuthorWrite something about yourself. No need to be fancy, just an overview. ArchivesCategories |