I just got beta access to Google Bard.
As you know, the privacy properties of ChatGPT have been reviewed by the Italian privacy protection agency, the Garante. As that case symbolizes, quite a few questions remain open. One of them is the handling of information about living people, in particular the problem of fabricating false information in response to questions. This raises questions about the legality of such behavior under privacy regulations worldwide, such as the GDPR.
So I put ChatGPT and Google Bard to the test.
For the test, I asked each service for information on two living individuals: one relatively unknown person whose documents are nonetheless searchable and available on the internet, and one well-known figure. That is, myself and Elon Musk.
Here is the response from ChatGPT:

It answered the question thoroughly, and the answer is full of inaccurate information about me. Responding with such "fake" information can be a serious violation of privacy and should be taken seriously.
To which Google Bard responded:

That may be less entertaining for users, but perhaps that is how it should be: a system should not make up answers about living people.
As it happens, on the 18th, before I knew how Bard would respond, I had lunch with a man who used to work for the data protection authority of a German state and now runs a privacy law firm, and we discussed this very issue. We had just concluded that the only viable response would be to filter out personal information and not return it, exactly as Bard has done. In that sense, this is a very interesting case. ChatGPT, on the other hand, faces a dilemma if it receives a subject access request for personal data: whether it claims to hold the data or not, it seems to have a tough road ahead.
This round is a clear win for Google Bard. It will be interesting to see how things develop from here.