ChatGPT is not an Expert

ChatGPT was very confident, but also wrong.

For the second time in two weeks, someone contacted me to ask whether I would testify to the results of an "analysis" conducted by ChatGPT. In this matter, the individual asked ChatGPT to analyze some documents. ChatGPT provided the answer the person wanted, then generated a confidence value that was entirely fabricated. Neither the analysis nor the confidence level was grounded in fact.

ChatGPT and similar AI tools can sound very convincing. Every week, I see stories of people relying on incorrect results from an AI tool.

There are many great uses for AI, but you cannot simply trust the outputs; they must be verified.

The problem for the public is that these AI tools genuinely excel in certain areas; there is already some evidence that AI is replacing jobs. But those abilities are not uniform across skills and industries. The public, who may not be paying close attention, sees a tool that works in some situations and trusts that it will work in all of them.

AI is likely to continue causing headaches in the law. The problems will evolve over time as AI develops, but currently, it is not a replacement for experts.
