I agree with a lot of what you said, but perhaps I am more glass half full, so I don’t think it is “widespread ‘stupidity’ and ‘laziness’”; I think it is more techno-naivety. At a management level I think there is a lack of understanding of the legal issues in play and an honest desire to assist frontline practitioners with their workloads. So, when a flashy ‘AI company’ says “hey, we have a tool that writes your assessments” and “we are UK GDPR compliant” (not explaining that this doesn’t mean the LA or NHS is necessarily compliant when using the AI tool in practice), they think “great, let’s trial and roll it out”… And I do believe this is done in good will, but it is unfortunately filled with techno-naivety and a lack of legal literacy.
At a frontline level, professionals receive very limited or no training on either the legal or the ethical issues in play, and they are told by their managers ‘here is this new AI tool - it’s fine to use in assessments, just tell people it’s like a Dictaphone’. Except it is nothing like a Dictaphone, and consent for handling Special Category Data under UK GDPR can’t just be valid but must be explicit - I am not even sure there is agreement locally or nationally on what ‘explicit’ consent would look like for the use of these AI tools in social care assessments, especially for assessments that produce or lead to decisions with legal effects concerning a person… So, because they don’t have the time to reflect (although some have - see Elizabeth’s reflections above, for example), they assume this is fine to use: ‘I am the human in the loop’ (research shows this doesn’t work as a safeguard) - and then we end up with ‘generic’ assessments, filled with hallucinations and errors that take longer to check than to write in the first place.
One last observation - what I found most interesting is how these LLM AI tools can’t deal with the ‘human’ element of ‘social’ care assessments. They can deal fairly accurately with your typical ‘care’ issues from a transcript (though still with errors and omissions in most assessments), but they completely lose the ‘humanity’ within the real-world interaction… I would love to see some research on how service user interactions change between non-AI-recorded and AI-recorded assessments.