Insightful Perspectives on AI

  • 16 Aug 2024 7:25 AM
    Reply # 13394384 on 13382961

    See also the letter that the UK based "Sense about Science" group has just sent to Secretary of State for Science, Innovation and Technology in the UK:

    https://senseaboutscience.org/activities/open-letter-to-government-on-need-for-greater-transparency-of-social-media-algorithms/

    "As the Home Secretary said last week, social media companies are putting “rocket boosters under” misinformation. Social media platforms drive and amplify content to maximise attention, prioritising user engagement over any other criteria."

    Last modified: 16 Aug 2024 4:11 PM | John Maindonald
  • 15 Aug 2024 4:43 PM
    Reply # 13394061 on 13394049

    But who are the culprits?

    And I go back a few generations of AI, before things were degraded so much IMHO - possibly when tech went from being a useful tool to the be-all and end-all - maybe, as Chris says, that was always where it was at.

    I share the concerns

    Hype. Everyone thinks they are experts. Everyone can make a buck on the side of it all etc

    But I love ChatGPT. Its "creativity", if not intelligence, is wonderful in the right hands.

    And I try not to sound too bitter about the new world. Scary though

    I spend too much time reading conversations between "experts" about GPTs, and get concerned when the local retailer is advertising their use of AI on commercial radio. All those early pioneers being forgotten and ignored.

    And you just have to hope in a very shallow way that your investments have a share of all the BS

    EDIT: After a few quick questions about Gnosticism:

    Me: Have you heard of Duncan Lowes?

    Yes, Duncan Lowes is known for his work in Gnostic studies and for being an advocate of the Gnostic tradition. He has written extensively on Gnosticism, including books and articles that explore the historical and theological aspects of Gnostic beliefs. His work often delves into how Gnosticism intersects with early Christianity and other religious traditions.

    Makes you laugh anyway LOL

    Last modified: 17 Aug 2024 6:19 AM | Duncan Lowes
  • 15 Aug 2024 2:56 PM
    Reply # 13394049 on 13382961

    I've several times asked ChatGPT "Tell me about John Maindonald". On the first occasion it told me that he had degrees from Melbourne University and the University of Chicago and lectured at Melbourne University, which might at best be described as plausible nonsense. On later occasions I was told the following, none of it true:
    Education: John Maindonald obtained his undergraduate degree from the University of Sydney and later completed a PhD at the University of New South Wales.
    Academic Positions: He has held various academic positions throughout his career, including roles at the University of Sydney.

    Most recently, it said "He is a professor emeritus at the Australian National University (ANU)" -- yes, I did have a long association with ANU, but I did not rise to the dizzy heights of professor or professor emeritus. There are a number of places on the web where details of my (past) academic and professional associations can be found. Finding several of these would be as simple as searching for 'John Maindonald' and CV. Clearly ChatGPT's storytelling involves settling on the first hints it can find (but what did it take as a hint that I'd had an association with the University of Chicago?), then making up material that seems plausible. In the wrong hands, such made-up material can be dangerous.

    It is a matter of urgency that students in school, decision-makers, and the general public are trained to be aware of the capacity of ChatGPT-type systems for fabulation. The same risks are inherent in AI more generally, when it is used in contexts where humans are not immediately checking the output.

    What protection is there against associating a person with a crime that they never committed?

    Last modified: 15 Aug 2024 7:18 PM | John Maindonald
  • 13 Aug 2024 1:00 PM
    Reply # 13393177 on 13382961

    Thanks John!

    Does anyone else feel like we've heard this before? Like AI last time round, Data Lakes, Data Mining, Big Data. I feel like every few years the IT and computer/tech crowd find a new buzzword to get everyone excited and to generate a new gold rush, with everyone rushing in to start mining and get an advantage. Yet who really makes money in a gold rush? The people supplying the miners.

    Still, I have to hand it to them. They do a much better job of marketing their profession than we do!

    I'm sure this iteration of AI will wind up being good (and useful) at something, an incremental improvement in general. Like when neural nets came out, and they turned out to be good at computer vision and other segmentation/classification tasks.

    For what it's worth, one of the things I keep telling researchers who are getting taken in by the rhetoric and buzz is that ChatGPT is pseudo-thinking at best, and has no capacity for judgment. Shannon Vallor framed it so much better:

    GPT-3 cannot think, and because of this, it cannot understand. Nothing under its hood is built to do it. The gap is not in silicon or rare metals, but in the nature of its activity. Understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the associative connections that hold a world of physical, social and moral meaning together. Understanding tells the agent how to weld new connections that will hold under the weight of the intentions, values and social goals behind our behavior. Predictive and generative models like GPT-3 cannot accomplish this.



  • 17 Jul 2024 5:28 PM
    Message # 13382961

    I have been impressed by the insights that Shannon Vallor, who is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute, offers. (She has several other prestigious academic responsibilities also.)

    Two articles in which she makes her sharp observations, with an apt use of metaphor, are:
      https://www.noemamag.com/the-thoughts-the-civilized-keep/
    'The hype around a new AI language generator reveals the sterility of mainstream thinking on AI today — and indeed on how we think about thinking itself.'

    https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
    ‘The rhetoric over “superhuman” AI implicitly erases what’s most important about being human.’

    Note also her book, which appeared in April:
    The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024). A YouTube video from 2018 covers many of the central ideas:
    https://www.youtube.com/watch?v=40UbpSoYN4k

    She is an impressive speaker. She makes brief comments on her background at
    https://www.ed.ac.uk/edinburgh-innovations/for-staff/inspirational-innovators/shannon-vallor/professor-shannon-vallor-reclaiming-humane-tech 

    Last modified: 17 Jul 2024 5:34 PM | John Maindonald