Tech giants have reportedly pledged to use AI responsibly. Should we believe they will keep their word? The article on Axios (a site I'm not very familiar with) is interesting:
Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and this is an acknowledgement on companies' part that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that pledging to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front.
Suppose, for the sake of argument, we think of AI responsibility the way we think of data responsibility. Can we say the tech giants have been responsible in managing our data? Can we say they have been responsible with regard to privacy? There are plenty of reasons why many people feel uncomfortable trusting certain tech companies. In fact, the news constantly brings word of large breaches, leaks, and other signs of systemic insecurity.
At the same time, tech companies are lobbying, and they have their own responsibility to protect profits for their shareholder owners. It is precisely this duty to shareholders that leads me to believe the profit motive could get in the way of protecting human rights, human dignity, privacy, or responsible AI.
Innovation, in my opinion, is critical, and bad regulation doesn't help anyone; at the same time, over-regulation is not the answer either. But do we have evidence of effective self-regulation in the tech industry right now on anything? Then, of course, there is the issue of all that data and all that AI being concentrated in a single industry, or even just a few companies.