Wednesday, May 15, 2024

StarRescue: the Design and Evaluation of A Turn-Taking Collaborative Game for Facilitating Autistic Children's Social Skills | Proceedings of the CHI Conference on Human Factors in Computing Systems



Should I send this?

My name is Nick Dalton. I am the man who asked you the two questions about something you struggled with. I think you partially understood what I was saying, but then you digressed. This was likely your first CHI, and I didn't want to make it more uncomfortable for you than I already had, so I didn't pursue the issue.


I want to respectfully and academically challenge your paper.


The session you were in was titled "Assistive Technologies for Learning and Information with Neurodiversity." You used the term "neurodiversity" in your introduction.


Ten years ago, I wrote a number of papers introducing the word "neurodiversity" to the human-computer interaction community, and I want to follow up on that.


The term "neurodiversity" is attributed to Judy Singer, an Australian sociologist who is herself on the autism spectrum. She first used the term in the late 1990s. Singer aimed to shift the discourse around neurological differences from one of pathology and deficit to one that recognizes and values the natural diversity of human brains and minds. Her work, along with that of journalist Harvey Blume, who also used the term around the same time, helped lay the foundation for the neurodiversity movement, which advocates for acceptance, accommodation, and appreciation of neurological differences as natural variations within the human population.


The term "neurodiversity" originated from a self-advocacy movement initially of people on the autism spectrum, but later the term spread to cover a number of similar conditions. Historically, the reason for the advocacy movement emerged out of a number of historic factors. Terms like "Asperger's syndrome" were originally created so that Nazi doctors could differentiate between those who should go to the gas chambers and those who could be reclaimed for some useful, albeit primitive, work. This was not confined to Nazi Germany. Forced sterilization of those with autism was not abolished in Japan until 1996. While rare, currently, there are laws in 31 U.S. states and Washington, D.C. that allow for the forced sterilization of individuals with disabilities, including those with autism. It should be pointed out that no state allows forced sterilization for the worst crimes, including rape. So, from a punishment point of view, autism is considered worse than mass murder.


Perhaps because of the introduction of the internet, people on the autism spectrum were able to come together as a community. Out of this, what has come to be called the neurodiversity movement emerged.


Neurodiversity challenges what is referred to as the medical pathology model of autism. There are a number of pointers towards this. First, if autism is such a spectacularly disabling condition, then why hasn't it quickly evolved out of the human experience? Second, it is very difficult to identify the neurological differences that define one brain as dysfunctional and another as neurotypical. The term "neurotypical" was itself created by the neurodivergent community to describe those without a neurological difference; sometimes the term "neuromajority" is used instead.


The neurodiversity movement deliberately modeled itself on other fights for social justice, such as feminism and queer rights. It challenges the defective model of autism. From the neurodiversity point of view, autism represents a different way of being human; it exists because it has upsides that benefit the larger neurotypical community. As such, autism, and now other conditions associated with neurodiversity such as dyslexia and ADHD, are not forms of brain damage but brain differences.


This is not to deny that brain damage does exist; for example, Parkinson's is clearly a medical pathology.


As such, neurodiversity should be regarded as a political stance similar to feminism. It should not be used as a polite collective noun for those who are neurodivergent, any more than we could use "persons with feminism" as a synonym for women. Not all women are feminists, and not all feminists are women.


Instead, the officially accepted community term is 'neurodivergent'. Being neurodivergent means having a brain that functions differently from that of the average or "neurotypical" person.


We can, and in fact many do, ignore the meanings of community terms. Academics can co-opt terms to mean anything they like; however, this form of appropriation shows gross insensitivity to the thinking, and the communities, from which these terms originate. We could refer to "feminist toilets" if we liked, but typically we do not. Academics are largely part of the predominant neurotypical group and, being the majority, have the power through greater numbers to override the original terms. This is what your paper perpetuates in the first line of its introduction.


As a neurodivergent, dyslexic academic and the father of a child with autism, I can understand that, from a medical pathology perspective, you might want to help children on the autism spectrum learn to collaborate.


In my experience, I have encountered many people who cannot believe that I cannot learn to read. When I was being assessed during my Ph.D., I spoke about this to the senior professor who was conducting the disability assessment. She pointed out to me that dyslexia was like losing an arm: it was never going to grow back, and no amount of work was ever going to make it right. If I had that level of neuroplasticity, she pointed out, then I would not be dyslexic. All I can do is rely on assorted coping strategies, just as an amputee relies on an artificial leg.


This is the same for your subjects. They are never going to be able to process social cues as you do. I know that many well-meaning people feel that if only we try hard enough, we will be able to overcome these limits which seem so simple to them. This is not going to happen. If I lose my legs, I'm never going to be able to run like you, no matter how hard I try. I can have coping strategies. The best thing that your game is going to achieve is getting these students to develop these coping strategies early on.


They are never going to be as good as the natural abilities of a neurotypical. If they could be, then neuroplasticity alone would allow us to switch to another part of our brains. My son, like your subjects, has to devote considerable primary cognitive effort to what you do effortlessly and unconsciously. Equally, if you looked hard enough, there would be things they can do without thinking which you would find difficult. My guess is that if you go back to your subjects a year later, the improvements you report will be gone.


From a neurodiversity point of view, the central question here is one of power. You, as a neurotypical academic, have the power to force them (the neurodivergent) to conform to your way and your vision of what is human. You get to decide that turn-taking and looking people in the eye are important; they don't get a vote in the matter.


This, for me, is what makes the medical model of autism incompatible with human-computer interaction. The central message of human-computer interaction is to put people at the center of the design process. The medical model of autism places the priorities of doctors and parents above the wishes of those involved themselves. This, for me, makes it less CHI-ish.


The central premise of neurodiversity is that, instead of forcing neurodivergent individuals to conform to the standard set by neurotypical adults and academics, we should seek to embrace diversity and find ways of accommodating these differences.


It might be clearer with an analogy. As a woman, you are most likely earning 80% of what your equivalently educated male peers earn. So one strategy your parents could have taken is to train you to look, sound, and behave like a man. You could have been forced to play games which trained you to stop acting like a woman. Then you could have passed yourself off as a man and had higher economic outcomes and a more successful life.


The alternative, feminist strategy is to change the world to one where gender differences are tolerated, and in fact embraced. It doesn't matter that you're a woman, so you don't need to be trained to be "gender typical". You can be good at whatever you want.


From a neurodiversity point of view, perhaps you should be working on games which encourage parents to accept their neurodivergent children for who they are, rather than training them to pretend to be normal.


A truly neurodiversity-informed approach would be to look at the positive factors of autism, the things the children are good at, and find ways to create value for these in the wider society.


I understand that this is not your current paradigm. The medical deficiency model is quite dominant: 95% of papers on robot interaction and autism assume a "you are defective, let me fix you" model of autism.


However, in the spirit of academic debate, I wanted you to be aware that there are alternatives.


If you are going to send me an "I'm sorry you think that way" email, then don't bother; it would just be a sign that I've failed to write in a persuasive and informative way.


Good luck with your future work. 




Saturday, March 30, 2024

How to think about AI if you don't know about AI

So many people worry about artificial intelligence. The central worry, for many, is whether AI can reach general levels of consciousness. I think this worry stems from a general lack of a good metaphor for understanding what we're looking at.

Generally, people don't have a good way to understand what AI is and how software production is part of this process. In fact, there is a very simple way.

This paper, or this YouTube video, introduces the work of a group that has trained pigeons to look for and identify cancerous breast tissue.

Briefly, they place a pigeon in front of a screen. The pigeon is shown a scan of breast tissue that may be cancerous. The pigeon pecks on two buttons: one labeled "cancerous" and the other labeled "not cancerous."

If the pigeon is correct, it receives food; if not, it receives none. The pigeons quickly learn to be about 80% accurate at identifying cancerous material, which isn't quite as accurate as a human, but not bad after only 48 training images. However, when the scores of several pigeons are combined, the final "flock" is no less accurate than a clinician.
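The "combining scores" step is just majority voting, and the arithmetic behind it is easy to check. A minimal sketch: the 80% figure is from the study, but the flock size of five is my own assumption for illustration, and this assumes the pigeons' errors are independent.

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent judges,
    each correct with probability p, gets the right answer."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.80                               # one pigeon
flock = majority_vote_accuracy(single, 5)   # five pigeons voting
print(f"one pigeon: {single:.2f}, flock of five: {flock:.3f}")
# → one pigeon: 0.80, flock of five: 0.942
```

Independence is doing the heavy lifting here: pigeons (or models) that make correlated mistakes gain much less from voting.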

This is quite amusing, except that if we substitute "AI" for "pigeons", it becomes quite miraculous, and we should expect AI to replace human clinicians soon.

The difference between artificial intelligence and pigeons becomes apparent when working with pigeons. You realize that much of the real magic lies in how you present images to the pigeons. It is this data preparation that allows the pigeon brain or AI to process the information. By manipulating digital information correctly, we can make the job of the pigeon or AI easier or harder.

Pigeons are tetrachromats, meaning they have four types of cone cells in their retinas that allow them to see a wide range of colors. However, their color perception is thought to be most sensitive in the short-wavelength (blue) and medium-wavelength (green) regions of the spectrum, and they may have difficulty distinguishing between certain colors, particularly those in the red-orange range. Therefore, by applying some Photoshop filters to the original images from the paper, we could make the pigeon's job easier, quicker, and more accurate.
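As an illustration only (not the filtering used in the actual study), here is a toy sketch of that kind of data preparation: re-encoding red-channel detail into the green band, where the pigeon is assumed to discriminate better. The image, the filter, and the 0.5 mixing amount are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
scan = rng.random((4, 4, 3))      # toy "scan": HxWx3 RGB floats in [0, 1]

def shift_red_to_green(img, amount=0.5):
    """Move a fraction of the red channel's signal into green,
    crudely re-encoding red-orange detail into a band the
    viewer (pigeon) is better at telling apart."""
    out = img.copy()
    moved = amount * out[..., 0]
    out[..., 1] = np.clip(out[..., 1] + moved, 0.0, 1.0)
    out[..., 0] -= moved
    return out

filtered = shift_red_to_green(scan)
```

The same data, presented differently, becomes an easier or harder problem: that is the point of the whole pigeon analogy.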

If you were a doctor whose job was to look at pictures of biopsies of cancerous cells all day, you might wonder if your job is at risk. I think most people at this point would wonder who is looking after the pigeons that are now doing the actual work.

Well, it is similar for AI. We have data centers burning electricity and people looking after the machines in them. Would a group of dedicated pigeon fanciers be any different, or more expensive? Sure, we could put them into remote data centers, hidden away from the public eye, but you would still have to charge for the energy (seed) and staff time. Given how good we are at raising broiler chickens in factory farms, you do wonder which would have the economic edge: the data center or the pigeon center.

Most of what AI work involves is figuring out how to present the world to the pigeons (or the AI) and getting them to press the right button for the right data.

For example, we could train pigeons to review college applications for graduate admissions. The applications would be converted to images, and the pigeons would peck a button labeled "make offer" or "don't make offer" based on previous data, saving time and money.


Could we build Pigeon ChatGPT?

There's nothing in principle stopping someone from building a pigeon version of ChatGPT, though it might require more than one pigeon. The key problem is how to present the pigeons with the text. Something like a word embedding (Word2Vec) would be needed to convert text into picture elements, along with a pigeon equivalent of an attention mechanism. The key part is presenting the neural network, or pigeon, with clear information.
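To make the "text into picture elements" step concrete, here is a toy sketch. The embedding table is random, standing in for trained Word2Vec vectors, and the vocabulary and row-per-word layout are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a trained Word2Vec table:
# one 8-dimensional vector per word.
vocab = ["scan", "tissue", "cell", "benign", "malignant"]
embeddings = {w: rng.normal(size=8) for w in vocab}

def text_to_image(words, table):
    """Stack each word's embedding as one row of a grayscale
    'image', rescaled to 0-255 so it could go on a screen."""
    rows = np.stack([table[w] for w in words])
    lo, hi = rows.min(), rows.max()
    return ((rows - lo) / (hi - lo) * 255).astype(np.uint8)

img = text_to_image(["tissue", "cell", "malignant"], embeddings)
print(img.shape)  # (3, 8): one row per word, one column per dimension
```

A pigeon could, in principle, be trained on pictures like this; the hard engineering is in the encoding, not in the pecking.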

So, could our PigeonGPT achieve general consciousness? Perhaps if we had a thousand pigeons sitting together, each with a different screen, performing different sub-tasks, it would create a "mind"? If you're not worried about a PigeonGPT, then why should you be worried about a neural network?






Monday, June 20, 2022

How to make a simple animated chart in plotly with Python ( and pywebio)

import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from pywebio.output import put_html

data = [[1, 11], [2, 12], [3, 8], [4, 14], [5, 15]]

# Build one cumulative copy of the data per animation frame:
# frame t contains the first t+1 points, tagged with frame index t.
whole = []
for t in range(1, len(data)):
    for k in range(0, t + 1):
        whole.append([t] + data[k])

# Create the pandas DataFrame
df = pd.DataFrame(whole, columns=['INDX', 'N', 'Integrated'])

fig = go.Figure(px.line(df, x='N', y='Integrated', animation_frame='INDX'))

html = fig.to_html(include_plotlyjs="require", full_html=False)
put_html(html)




And that's all you need if you want to make a simple animated chart.

Thursday, April 1, 2021

Why dyslexics make good coders | BCS

The worst three terms of my entire academic life are coming to an end. All the video recording, backed up by more face-to-face workshops and seminars, is finally giving way to having some time to think again.

In a conversation about using an online crowd-sourced system, I ended up on a slight tangent about why there are so many dyslexic programmers and why so many students drop out of computer science.

Following up on this, I found a BCS article by Prof. John Stein, Magdalen College, Oxford.

He suggests it's about thinking beyond the detail (see links below). One thing I have noticed about weak students is their inability to use indentation: you get things like if { { ( a > 4 ) { println)). The brackets failing to line up cause them endless problems. Perhaps Python works because it forces the two-dimensional notation of code before even the most basic program can be written.
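For what it's worth, the mismatch in a snippet like that is mechanically checkable. Here is a sketch of the stack-based check an editor performs; this is my own minimal version, not any particular tool's implementation.

```python
def first_unbalanced(code):
    """Return the index of the first bracket that cannot be
    matched, or None if all brackets balance."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for i, ch in enumerate(code):
        if ch in '([{':
            stack.append((ch, i))
        elif ch in pairs:
            if not stack or stack[-1][0] != pairs[ch]:
                return i          # closer with no matching opener
            stack.pop()
    return stack[-1][1] if stack else None  # leftover opener, if any

print(first_unbalanced("if { { ( a > 4 ) { println))"))  # → 26
```

The machine can spot the stray ")" instantly; the student staring at a one-dimensional line of brackets often cannot, which is exactly why two-dimensional layout matters.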

Perhaps the success of spreadsheets and Scratch programming (with the general neurotypical population) has to do with this forcing of visualisation.

Stein states 
'This preponderance of parvocellular connections, which tend to be more long-range, means that dyslexic brains are much better at associating detail from widely different parts of a static visual scene than comparing across time. Thus, they are quicker and more accurate at spotting impossible static constructions such as Escher’s drawings of impossible waterfalls, Penrose’s triangles and his impossible stairs. Most of us have to move our eyes from one side of the picture to the other to spot its contradictions, but many dyslexics can see them all at once.' 

I've been using flow diagrams this term with first-year students to overcome the problem of going from code to program. ...

Prof. John Stein, Magdalen College, Oxford, explores how dyslexia works and explains why the condition might be a profound positive when it comes to designing software.

Why dyslexics make good coders | BCS

https://www.scientificamerican.com/article/dyslexia-can-deliver-benefits/ ( on the impossible pictures thing). 


I've been wondering if I could do some eye-tracking experiments on students while they are learning to program. Perhaps if we could see what they look at, we might get a better understanding of what we are missing.

The fundamental problem is that what the weak students are missing is practice. This year particularly, it's been difficult to know if the students have actually been engaging with the materials. My feeling is that the students who drop out are the ones who are disappointed that they cannot learn programming in a few minutes. They are used to mastering something in seconds with instant feedback, or giving up.


Notes. 
Dyslexics -
Material reasoning
Interconnected reasoning (connecting all the dots)
Narrative reasoning
Dynamic reasoning (reasoning from things that have happened towards the future)




Sunday, September 8, 2019

Tuesday, June 18, 2019

Discovery of the day DeepGaze

Basically, someone has trained a neural network to simulate what would happen if you had an eye tracker and a group of subjects. It was intended for predicting eye tracking in scenes (i.e. saliency).


I tried it on this web page.

It kind of works with my intuition of where people look on the page.


What do people think? Does anyone know if someone has done this for a web page?

It seems like it would be very useful for designers: design a page, try it out with automatic "eye tracking", then redesign it to get the right things noticed.

File under: I wish I had thought of that… 

Thursday, January 17, 2019

Back without a bang

Stuff to read

https://www.nber.org/papers/w24174.pdf



Inequality is one of the main challenges posed by the proliferation of artificial intelligence (AI) and other forms of worker-replacing technological progress. This paper provides a taxonomy of the associated economic issues: First, we discuss the general conditions under which new technologies such as AI may lead to a Pareto improvement. Secondly, we delineate the two main channels through which inequality is affected – the surplus arising to innovators and redistributions arising from factor price changes. Third, we provide several simple economic models to describe how policy can counter these effects, even in the case of a “singularity” where machines come to dominate human labor. Under plausible conditions, non-distortionary taxation can be levied to compensate those who otherwise might lose. Fourth, we describe the two main channels through which technological progress may lead to technological unemployment – via efficiency wage effects and as a transitional phenomenon. Lastly, we speculate on how technologies to create super-human levels of intelligence may affect inequality and on how to save humanity from the Malthusian destiny that may ensue.


Wednesday, October 11, 2017

Bob Martin SOLID Principles of Object Oriented and Agile Design

This is a good explanation of why object orientation really exists. For me it talks about how we can design better future languages.





Tuesday, August 15, 2017

More positions going

Northumbria is looking for more people for the computing department.





Associate Professor in Computer Sciences
Faculty of Engineering and Environment
Northumbria University
Placed on: 11-07-2017  Salary: £49,772 to £55,998
Closes: 20 Aug

Professor in Computer Science
Faculty of Engineering and Environment
Northumbria University
Placed on: 12-07-2017  Salary: Competitive Salary
Closes: 20 Aug

Senior Lecturer/Lecturer (Fixed Term until 30 June 2018 – Maternity Cover)
Department of Computer and Information Sciences
Northumbria University
Placed on: 12-07-2017  Salary: £33,943 to £48,327 pro rata
Closes: 20 Aug

Senior Lecturer/Lecturer in Computer Science
Department of Computer and Information Sciences
Northumbria University
Placed on: 24-07-2017  Salary: £33,943 to £48,327
Closes: 20 Aug

Senior Lecturer/Lecturer in Data and Information Science
Department of Computer and Information Sciences
Northumbria University
Placed on: 24-07-2017  Salary: £33,943 to £48,327
Closes: 20 Aug

Wednesday, June 21, 2017

Northumbria Learning and Teaching

I've just done day 1 of Northumbria Learning and Teaching. Perhaps it's the post-workshop blues, but I'm really down about the whole situation.

I think I entered academia because, as part of the first generation of students with a learning disability, I had a lousy education experience, and I thought I could do something about it for the next generation. Now I'm sitting in a workshop where the reading material about teaching for accessibility isn't itself screen-readable, and I'm wondering how I am being co-opted into this dystopia.

I have always believed in the transformative power of education. I have always wanted to see myself as giving a hand up rather than being another barrier to be overcome. I know this isn't always the way, but my feeling was that this is what we all generally want. The opposite view is that education is a way of reserving privilege for a particular social class, in this case the neurotypical class. I like to believe this isn't the general view of my fellow academics, but I've seen on-the-ground evidence that it isn't true.

If you do training on how to interview people, you learn that interviewers generally want to "self-reproduce". "If only I had the time, I would be the best person for the job," interviewers are supposed to think, so they pick the candidate most like themselves. This is supposed to be the source of unwitting bias: I'm white, you're not, so why should you be any good at this job? So I'm not saying people are being deliberately exclusive; it's the non-deliberate outcome of other priorities. I saw this in plain view all over the workshop, but this time it's about getting the students to be more like "us". One interpretation of academic assessment, then, is that it gives potential employers an excuse (qualifications) to reject people who simply think differently from them. In this promotion of me-ness, academic activity is at its worst a way of demoting difference.

On the first day of the sessions, I saw how this comes about. I guess it's all about priorities: one seemingly natural priority pushes another out, and somewhere in all this the few, without any fault of their own, become the fewer. Not intentionally but unintentionally. I watch people becoming victims not of deliberate exclusion but of impetuous neglect.

I know the system doesn't feel I should be here. I guess this is why I overstay my welcome. I am the thin end of the wedge in the door. I am the irritant who doesn't know their place. I am the upstart. I am the vulgarian. I am the nail that sticks out, and if I can't take the knocks, I should return to the pool my kind came from. But I choose not to. I guess those are the greatest words ever written: I will not, because I choose not to.






Thursday, November 10, 2016

Apps are dying. Long live the subservient bots ready to fulfil your every desire

Some interesting info on new types of interaction processes



Apps are dying. Long live the subservient bots ready to fulfil your every desire: In October 2009, Apple launched in-app purchases for the App Store. The software industry hasn't looked back. In the second half of 2013 alone, Candy Crush Saga made $1.04 billion from microtransactions. More recently, Pokémon GO, Niantic's runaway-success game, made $35 million from in-app purchases in two weeks. According to analysts IDC, revenue from mobile apps, not including advertising, was around $34.2 billion in 2015. For bots, the opportunity could be even greater. "Bots have emerged as a high-potential channel of distribution for mobile services," says Guo. Not only do messaging apps have a captive audience, the cost of developing bots is lower than for apps. "The progression from trivial to sophisticated is going to happen faster," says Underwood. "App developers have been able to learn from the introduction of prior interfaces because it wasn't long ago that mobile apps came on the scene. It took a few years in mobile. With bots I think it will happen in half the time." Libin, one of the bot industry's leading investors, has no doubts about its transformative potential. "There are going to be 100 million bots. It's going to be similar to the app gold-rush, but magnified," he says. As with apps, the vast majority of bots will be pointless, he argues. "But the few hundred that are actually really good are going to be world-changing."