A request during my October visit to my family doctor’s office surprised me. I was asked for permission to record my private conversation with my caregiver. It was explained that the recording would be converted to text on their computer system by artificial intelligence transcription software. I hesitated briefly before giving my consent. After all, verbal dictation isn’t perfect either. Right?
That evening, as I tried to settle into sleep mode, these questions romped through my mind: What if a blooper in the AI transcription resulted in an error in a health diagnosis or treatment plan? Could this private conversation about my health issues be hacked? Are recorded doctor/patient discussions revised before storage to edit out small talk? What if I made some off-colour or humorous comment? Would that end up on my health file? You can see where I’m going with this concern. Another grey area of AI usage surfaced as I tossed and turned.

As a published fiction and non-fiction author, I’ve been following concerns posted on social media by fellow writers. Many have had their books pirated and fed into online AI databases for others to copy when they have writer’s block. This illegal practice steals the royalty payments due to authors and allows others to take credit for their work.

I’m struggling to keep an open mind and to enjoy learning more about the tremendous benefits that AI is introducing for mankind. However, I sincerely hope regulations are put in place to ensure these incredible artificial intelligence advancements are not misused.

Since I previously aired my trials and tribulations over allowing, on a trial basis, an additional AI tool into my home to buddy up with my iPhone and iPad, I wanted to share my latest AI quandary. Click to read my article, “Home Assistant or Privacy Invader?”, first published in the Canadian magazine Cloud Lake Literary, Volume 5, March 14/23.