The Use of AI Transcription in Assessments

Hi All

I have been looking into the use of AI in the social care sector, as I have very specific concerns, both legal and ethical, about how certain AI tools are being used for social care assessments, especially when those assessments could produce or lead to decisions with legal effects concerning a person. By way of analogy, see Part IV of this report, Legal Opinion: The use of Artificial Intelligence tools in asylum cases | Open Rights Group, which helpfully explains the potential legal issues in play, and also Scribe and prejudice? | Ada Lovelace Institute, which outlines the potential ethical issues for the social care sector.

Is anyone willing to share (no need to name the AI provider or where you work) whether you have found that AI is being used in your place of work - not just as a note-taking aid, but specifically for generating assessments based on a transcription of a recorded conversation?

Thanks all :slight_smile:


I tried AI (Google paid-for services, so excluded from harvesting and data training) for reporting on a case file. I obtained consents (complaint investigation), and my initial impression was that I only needed to tweak a few summaries in what was a well-worded response (excellent, in fact) - except that it reported one family member as deceased when they had recovered from an illness (we later worked out that this error was a combination of AI word confusion and data prediction). One word, easily overlooked when busy. I have not used it since, except to model questions and identify gaps.

Thanks Del - your observation regarding the error/hallucination/inaccuracy is interesting; thanks for sharing.

Definitely being used and promoted in my LA.

Can you share which social care assessments it can be used for? Is it pure transcription, or does it provide ‘inference’ (conclusions) into the assessments? And have you received any guidance on ‘special category data’ and explicit consent in relation to its use?

It’s being used for Care Act assessments. I used it a few times and did not like it, so I have stopped using it. I believe it has now been extended to other assessments, such as MCA and DoLS best interests assessments, and probably more. It provides a transcript, which you can edit. I don’t believe it creates conclusions. We cannot use it without consent, but I don’t think we have a formal consent form that incorporates AI, so it would just be verbal consent.

Thanks Elizabeth for sharing.

Hi Elizabeth - do you mind if I ask what you didn’t like about the ‘AI tool’? And was the transcript like a Teams transcript, so “P said…” and “Y said…”, or was it more than that? Something like this, which is a test AI example:

“X was able to consider some of the benefits and drawbacks of the different care options. She acknowledged, “I wouldn’t do it myself. And you wouldn’t forget. And I do forget things. I know that” when discussing the benefit of extra care. She also recognised the risk of forgetting to eat: “I’d get home really long… You get ill. You might even have to go to hospital, which wouldn’t be good. You don’t want that, do you? No.” However, when asked to weigh the options and make a decision, X often deferred to others, saying, “I don’t mind,” “I’m not sure actually,” and “What would you think about if I had a chat to ‘Y’ about what they think?.. If son thinks one particular option, should we go with what son thinks? Yeah. Yeah?” This suggests some difficulty in independently weighing the information and making a decision, and a tendency to rely on others to decide for her.”

It is more like the second example you gave. When I used it, despite placing the receiver in close proximity, it missed a lot of the conversation. Also, I felt it removed my professional judgement - which I know I can add back in - but it felt as if I was relying on it to take me in the right direction. Lots of people are using it with great success, though. The view was that it would make us more productive, etc.; I was concerned that it would make us lazy and that we could miss things.

Thanks for sharing, Elizabeth. Read the Scribe and prejudice? article I linked above, as you will find that it supports your concerns. Thanks again.

I see that Social Work News is posting a similar line of thought (albeit without the depth of research and professional insight offered here) - no criticism of SWN intended.

‘AI’ is not just one thing, any more than cars are just cars - a Reliant is not a Rolls-Royce, obviously. As I will not be allowed to name the AIs with transcription features, it is difficult to compare the performance of ‘donkeys’ with ‘race horses’ (both being animals, I dare say). No insult intended to humans or animals.

I will attempt to share my knowledge and experience of AI, with and without transcription, for business and non-business use cases. This is at the risk of someone asking, “Are you an AI expert?” - in which case, as I am not an AI expert, my thoughts may be disregarded.

The General Situation with AIs:
Some obvious points need to be stated first, because some people (not on these forums) appear not to know them (actions speaking louder than words):

  1. AI has no expertise in any field.
  2. AI may have facts if given information.
  3. AI has the power to extract a simulation of meaning from words by statistical analysis of patterns of human use of words, imagery, mathematics etc. (a toy sketch of this idea follows this list).
  4. The above does not mean that AI understands its output. Its ‘understanding’, if that word is applied to it, is simulated.
  5. AI can appear creative, but any such thing is mathematical and simulated.
    [None of the above means that AI cannot make new discoveries.]
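To put some flesh on point 3, here is a deliberately crude sketch of the underlying idea - my own addition, and vastly simpler than any real model: a program that ‘continues’ text purely from counted word patterns, with no grasp of meaning. The tiny corpus and function name are invented for illustration:

```python
import random
from collections import defaultdict

# Invented toy corpus; real models are trained on vastly more text.
corpus = (
    "the assessment was completed today . "
    "the assessment was reviewed by the worker . "
    "the worker completed the assessment today ."
).split()

# Count which word follows which: pure statistics, no meaning involved.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def continue_text(word, length=6):
    """Pick each next word at random from words seen after the current one."""
    words = [word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the"))
# Possible output: "the assessment was reviewed by the worker"
# It can look sensible, yet nothing here understands a single word of it.
```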

Transcription:
As a result of analysing human languages, AI can do a great brute-force job of transcribing words and sentences, better than most humans, who learn perhaps 2,000 words in each language. But the output is not ‘understanding’, because AI has no true grasp of nuance and context.
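As a concrete illustration of that point (my addition, not a description of any poster’s tool): the open-source Whisper speech-to-text library turns audio into plain text, and that text is all you get. Any ‘who said what’, let alone clinical or legal significance, has to be added by a human or by further, equally statistical, processing. The audio file name below is hypothetical:

```python
# pip install openai-whisper  (the open-source Whisper speech-to-text library)
import whisper

# "base" is one of Whisper's smaller published model sizes.
model = whisper.load_model("base")

# "visit_recording.mp3" is a hypothetical recording name, purely for illustration.
result = model.transcribe("visit_recording.mp3")

# The result is just text plus timings: no speakers, no judgement,
# no sense of what was significant in the conversation.
print(result["text"])
for segment in result["segments"]:
    print(f"{segment['start']:.1f}s-{segment['end']:.1f}s: {segment['text']}")
```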

But ‘race horse’ type AI can sense breathing patterns, hesitations, emphases and stress in spoken words - again, by a mathematical process. That does not mean it ‘knows’ what is happening in the job of transcribing a recorded conversation.

‘Race horse’ type AIs can also mathematically analyse video imagery to obtain a better simulated context of what is happening in a transcription session.

In one non-business scenario, an AI on my phone was allowed real-time video of what I was seeing, to assist me in finding and changing a fuse in a tight compartment in my car. I was talking to the AI, and the AI was responding by voice with pretty good ideas and instructions. This allowed me to succeed in my task instead of paying to get a car electrician out to do the job. But none of that means the AI has expertise in, or understanding of, anything. I assert that people take simulated expertise and understanding as the real thing.

Back to transcription. The ‘donkey’ type AI repeatedly made a mess of transcribing conversation. A human - the usual ‘admin’ in the NHS - would have had to spend approximately four hours trying to make sense of what was transcribed. A waste of time - this was abandoned.

Considerations and my opinions on AI for professional purposes:

  1. AI-generated content and analyses must never be allowed wholesale into records.
  2. A human - a professional - must always be responsible for any AI-generated work.
  3. Whether or not it is declared ‘AI assisted’, a human being is responsible.
  4. Human beings should be sacked from their positions for AI mess-ups - to get the message home, i.e. to make it personally important.
  5. Any form of AI-generated ‘assessment’, ‘evaluation’ or ‘summary’ is owned by a human.

Poor use of AIs for any purpose:

  1. Not supplying the AI with sufficient training and context materials - sackable behaviour, IMHO.
  2. Not giving sufficient or adequate instructions, root awareness and human context.
  3. Copying and pasting without proofreading, fact-checking and correcting all errors.
  4. Humans blame-shifting onto AI, or ducking behind ‘AI-generated’ - should be immediately sackable.

The bottom line:

  1. AI is a tool (so are pliers, blenders, cookers, cars, computers etc etc etc).
  2. The tool has no true knowledge, skill or experience.
  3. Use the tool the wrong way and one risks causing serious harm or loss.

The above was purely human-generated. I passed it through two of my well-informed AIs, programmed for my business use cases. They agree with me. You might say “they’re programmed to agree with you” - and if you do, then the AI agreement is irrelevant.

Thanks Russell, I read with interest, and thanks for sharing your perspective. A lot of the points you make are similar to points I am making in the rough draft of an article I am hoping to publish on this issue, with a specific focus on the potential legal implications. I particularly liked these two points: “A human - the usual ‘admin’ in the NHS - would have had to spend approx 4 hours trying to make sense of what was transcribed. Waste of time - this was abandoned” (I have dyslexia, and I find checking AI transcription harder and a far longer process than just getting on with it and writing it myself), and “AI generated content and analyses must never be allowed wholesale into records” - I could not agree more with this point, and I think it has significant legal implications as well.

A sentence from my draft article seems to go along with some of your train of thought: “This is not simply a question of ethics or innovation; it is crucially a question of legal literacy within the profession. The risk is not that practitioners are careless, more that these systems are being pitched, purchased and deployed faster than the profession can interpret the legal implications.”

I agree. However - these are not suggestions for your article, just my own thoughts.

  1. ‘Careless’ is a euphemistic term to cover widespread ‘stupidity’ and ‘laziness’. Did I insult any individual? I did not! In a predominantly woke world these days, one may be sanctioned for calling it exactly what it is.
  2. There are other forms of ‘carelessness’ in health services that have nothing to do with AI.
  3. But wherever one sees ‘carelessness’ - look two tiers of management higher, I always say - and if you (aka me) really do look higher, you will most probably find a) non-existent management or b) abysmal management. [Management includes supervision.]
  4. I don’t think ‘literacy’ is a powerful force against ‘carelessness’ (as I conceptualise it). How so? I extrapolate from no less than psychiatry itself, where literacy is very high and ‘carelessness’ leaves behind a litigation time-bomb waiting to explode.
  5. One can educate people till the cows come home; what they do after the literacy is quite a different matter. What people do when no one is looking is most important. That’s where ‘carelessness’ grows. Says who? Says nearly every major public inquiry into some debacle or other. You can start with the Nottingham Inquiry and work backwards if you are so inclined.
  6. The software industry is currently slamming into a velocity trap, where a massive spike in initial output creates a long-term mountain of work. Because AI generates opaque code that lacks coherent human logic, the foundation of a project starts suffering from architectural decay almost immediately. Instead of making things easier, senior staff are being turned into AI janitors, spending their days refactoring machine-made messes and digging out of technical debt that compounds faster than they can fix it. This means high-speed progress on a spreadsheet, but ‘soft errors’ in the logic lead to bloated systems and an unstable mess that eventually requires more human hours to stabilise than if it had just been written properly the first time. [I don’t have space enough to unpack and de-jargonise this. One does not need to be a software expert to appreciate the problem.] The point is that if this is happening in a world where the code is visible as well as the outputs - far more visible than in legal, health and social care - then one can expect even more trouble in those domains.
  7. I say that any failures in the use of AI (legal, health and social care) will be structural and related primarily to systemic issues. Hence I project - not predict - based on patterns elsewhere, that there will be waves of debacles from the use of AI over the next 10 years. Your government - seemingly obsessed with the idea that AI will cut costs and improve efficiency - has a rude surprise coming. I called it.

But someone may come along and say the usual “Not necessarily!” Wait for it. It’s coming. But did I say ‘necessarily’? I did not. I made probabilistic projections. And some may think I am generalising from one set of experiences in the software industry. Well, I have neither the time, nor will I be given the space, to make a 5,000-word ‘evidence-based’ argument.

History has shown that I am more often right than wrong. But nobody except me knows that. Now you know why I am limited to only three responses in any thread. The evidence speaks for itself. So - one response left, if my count is right.