Saturday, March 30, 2024

How to think about AI if you don't know about AI

So many people worry about artificial intelligence. The central worry, for many, is whether AI can reach general consciousness. I think this worry comes from a general lack of a good metaphor for understanding what we're looking at.

Generally, people don't have a good way to understand what AI is, or how software production has become part of the process. In fact, there is a very simple way.

This paper

or this YouTube video

introduces the work of a group that has trained pigeons to look for and identify cancerous breast tissue.

Briefly, they place a pigeon in front of a screen. The pigeon is shown a scan of breast tissue that may be cancerous. The pigeon pecks on two buttons: one labeled "cancerous" and the other labeled "not cancerous."

If the pigeon is correct, it receives food; if not, it receives none. The pigeons quickly learn to be about 80% accurate at identifying cancerous material, which isn't quite as accurate as a human, but not bad from 48 images. However, when their scores are combined, the final system is now as accurate as a clinician.
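The score-combining step is just ensemble voting, and a few lines of simulation show why it helps. The judge model and numbers here are a toy assumption (independent pigeons, each 80% accurate), not the paper's actual data:

```python
import random

random.seed(42)

N_JUDGES = 5       # pigeons in the flock (hypothetical number)
N_IMAGES = 10000   # trials
P_CORRECT = 0.8    # each pigeon's individual accuracy, as in the paper

def judge(truth):
    """One pigeon: answers correctly with probability P_CORRECT."""
    return truth if random.random() < P_CORRECT else 1 - truth

solo_right = 0
flock_right = 0
for _ in range(N_IMAGES):
    truth = random.randint(0, 1)
    votes = [judge(truth) for _ in range(N_JUDGES)]
    solo_right += (votes[0] == truth)            # a single pigeon
    majority = 1 if 2 * sum(votes) > N_JUDGES else 0
    flock_right += (majority == truth)           # the flock's combined vote

print(solo_right / N_IMAGES, flock_right / N_IMAGES)
```

With five independent 80% judges, the expected accuracy of a majority vote works out to about 94%, which is the whole trick behind "flock-sourcing" (and behind ensembles of neural networks).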

This is quite amusing; but substitute AI for the pigeons and it becomes quite miraculous, and we should expect AI to replace human clinicians soon.

The difference between artificial intelligence and pigeons becomes apparent when working with pigeons. You realize that much of the real magic lies in how you present images to the pigeons. It is this data preparation that allows the pigeon brain or AI to process the information. By manipulating digital information correctly, we can make the job of the pigeon or AI easier or harder.

Pigeons are tetrachromats, meaning they have four types of cone cells in their retinas that allow them to see a range of colors. However, their color perception is thought to be most sensitive in the short-wavelength (blue) and medium-wavelength (green) regions of the spectrum, and they may have difficulty distinguishing between certain colors, particularly those in the red-orange range. Therefore, by applying some Photoshop filters to the original images from the paper, we could make the pigeon's job easier, quicker, and more accurate.
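The "Photoshop filter" idea amounts to remapping colours the pigeon sees poorly into ones it sees well. A toy sketch with numpy (the particular transform is made up for illustration, not a real pigeon-vision model):

```python
import numpy as np

def shift_reds_to_blue(image):
    """Remap red-dominant pixels into the blue channel, where pigeon
    colour vision is reportedly more sensitive. Toy transform only."""
    out = image.astype(float).copy()
    red_heavy = image[..., 0] > image[..., 2]   # pixels where R > B
    # Swap red and blue channels on those pixels
    out[red_heavy, 0], out[red_heavy, 2] = (image[red_heavy, 2],
                                            image[red_heavy, 0])
    return out.astype(image.dtype)

# A 1x2 "image": one red pixel, one blue pixel (RGB)
img = np.array([[[200, 10, 30], [30, 10, 200]]], dtype=np.uint8)
shifted = shift_reds_to_blue(img)
print(shifted[0, 0])  # → [ 30  10 200]
```

The same kind of channel manipulation, done well, is exactly the "data preparation" that makes the pigeon's (or the network's) job easier or harder.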

If you were a doctor whose job it was to look at pictures of biopsies of cancerous cells all day, you might wonder if your job is at risk. I think most people at this point would instead wonder who is looking after the pigeons that are now doing the actual work.

Well, it is similar for AI. We have data centres burning electricity and people looking after the machines inside them. Would a group of dedicated pigeon fanciers be any different, or any more expensive? Sure, we could put the pigeons into remote data centres, hidden away from the public eye, but you would still have to charge for the energy (seed) and staff time. Given how good we are at raising broiler chickens in factory farms, you do wonder which would have the economic edge: data centre or pigeon centre.

Most AI work involves figuring out how to present the world to the pigeons (or the AI) and getting them to press the right button for the right data.

For example, we could train pigeons to review college resumes for graduate applicants. The applications would be converted to images, and the pigeons would peck a button labelled 'make offer' or 'don't make offer', trained on previous admissions data. So saving time and money.
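The pigeon's reward loop is, in effect, a perceptron: peck, get seed or not, adjust. A toy sketch with entirely made-up "resume" features (years of experience and GPA) standing in for the images:

```python
# Toy "resumes": [years_experience, gpa]; label 1 = an offer was made.
# The data is invented purely for illustration.
history = [([5.0, 3.8], 1), ([1.0, 2.1], 0), ([4.0, 3.5], 1),
           ([0.5, 2.5], 0), ([6.0, 3.9], 1), ([2.0, 2.0], 0)]

weights = [0.0, 0.0]
bias = 0.0

def peck(features):
    # The "pigeon" pecks 'make offer' (1) or 'don't make offer' (0).
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Reward learning: nudge the weights only when the peck was wrong (no seed!).
for _ in range(20):
    for features, label in history:
        error = label - peck(features)
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print([peck(f) for f, _ in history])  # → [1, 0, 1, 0, 1, 0]
```

After a few passes the pecks match the previous decisions exactly, for pigeon and perceptron alike.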

Could we build Pigeon ChatGPT?

There's nothing in principle stopping someone from building a pigeon version of ChatGPT, though it might require more than one pigeon. The key problem is how to present the pigeons with text. Something like a word embedding (Word2Vec) would be necessary to convert text into picture elements, along with a pigeon equivalent of an attention mechanism. The key part is presenting the neural network, or pigeon, with clear information.
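To make that concrete: a word embedding just turns each word into a vector of numbers, and a vector of numbers can be painted as pixels. A toy sketch, with tiny hand-made vectors standing in for real Word2Vec output:

```python
import numpy as np

# Tiny hand-made "embeddings" standing in for real Word2Vec vectors.
EMBED = {
    "cancer": [0.9, 0.8, 0.1, 0.0],
    "benign": [0.1, 0.2, 0.9, 0.8],
    "tissue": [0.5, 0.5, 0.5, 0.5],
}

def text_to_image(words, cell=8):
    """Render each word's 4-d vector as a row of grey squares (0-255)."""
    rows = []
    for w in words:
        vec = np.array(EMBED[w])
        row = np.repeat((vec * 255).astype(np.uint8), cell)  # widen cells
        rows.append(np.tile(row, (cell, 1)))                 # make them square
    return np.vstack(rows)

img = text_to_image(["cancer", "tissue", "benign"])
print(img.shape)  # → (24, 32)
```

The pigeon never needs to read: similar words produce similar grey patterns, which is all the pecking really requires.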

So, could our PigeonGPT achieve general consciousness? Perhaps if we had a thousand pigeons sitting together, each at a different screen performing a different sub-task, would that create a 'mind'? If you're not worried about a PigeonGPT, why should you be worried about a neural network?

Monday, June 20, 2022

How to make a simple animated chart in plotly with Python ( and pywebio)

import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

data = [[1, 11], [2, 12], [3, 8], [4, 14], [5, 15]]

# Build one animation frame per step t; frame t repeats every point up to t
whole = []
for t in range(1, len(data)):
    for k in range(0, t + 1):
        whole.append([t] + data[k])

# Create the pandas DataFrame
df = pd.DataFrame(whole, columns=['INDX', 'N', 'Integrated'])

fig = go.Figure(px.line(df, x='N', y='Integrated', animation_frame='INDX'))

html = fig.to_html(include_plotlyjs="require", full_html=False)

And that's all it takes to make a simple animated chart.
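The cumulative-frame trick above can also be written as a single comprehension, which makes the shape of the data easy to check (with pandas only, so nothing plotly-specific; pywebio's put_html(html) would then drop the finished chart into a page):

```python
import pandas as pd

data = [[1, 11], [2, 12], [3, 8], [4, 14], [5, 15]]

# Frame t contains every [N, value] point up to and including index t.
whole = [[t] + data[k] for t in range(1, len(data)) for k in range(t + 1)]

df = pd.DataFrame(whole, columns=['INDX', 'N', 'Integrated'])

# Frames 1..4 hold 2, 3, 4 and 5 points respectively: 14 rows in total.
print(len(df), df['INDX'].nunique())  # → 14 4
```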

Thursday, April 1, 2021

Why dyslexics make good coders | BCS

The worst three terms in my entire academic life are coming to an end. All the video recording backed up by more face-to-face workshops and seminars is finally giving way to having some time to think again.  

A conversation about using an online crowd-sourced system ended up on a slight tangent about why there are so many dyslexic programmers, and why so many students drop out of computer science.

Following up on this, I found a BCS article by Prof. John Stein, Magdalen College, Oxford.

He suggests it's about thinking beyond the detail (see links below). One thing I have noticed about weak students is their inability to use indentation. You get things like if { { ( a > 4 ) { println)) — and the brackets failing to line up cause them endless problems. Perhaps this is why Python works: it forces the two-dimensional notation of code before even the most basic program can be written.

Perhaps the success of spreadsheets and Scratch programming (with the general neurotypical population) is to do with this forcing of visualisation.

Stein states 
'This preponderance of parvocellular connections, which tend to be more long-range, means that dyslexic brains are much better at associating detail from widely different parts of a static visual scene than comparing across time. Thus, they are quicker and more accurate at spotting impossible static constructions such as Escher’s drawings of impossible waterfalls, Penrose’s triangles and his impossible stairs. Most of us have to move our eyes from one side of the picture to the other to spot its contradictions, but many dyslexics can see them all at once.' 

I've been using flow diagrams this term with first-year students to overcome the problem of going from code to program. ...

Prof. John Stein, Magdalen College, Oxford, explores how dyslexia works and explains why the condition might be a profound positive when it comes to designing software.

Why dyslexics make good coders | BCS ( on the impossible pictures thing). 

I've been wondering if I could do some eye tracking experiments on students while they are learning to program. Perhaps if we could see what they see/look at we might get a better understanding of what we are missing. 

The fundamental problem is that what the weak students are missing is practice. This year particularly, it's been difficult to know whether the students have actually been engaging with the materials. My feeling is that the students who drop out are the ones who are disappointed they cannot learn programming in a few minutes. They are used to mastering things in seconds, with instant feedback, or giving up.

Dyslexics -
Material reasoning
Interconnected reasoning (connecting all the dots)
Narrative reasoning
Dynamic reasoning (reasoning from things that have happened into the future)


Tuesday, June 18, 2019

Discovery of the day DeepGaze

Basically, someone has trained a neural network to simulate what would happen if you had an eye tracker and a group of subjects. It was intended for mobile eye tracking in scenes (i.e. saliency).

I tried it on this web page.

It kind of works with my intuition of where people look on a page.

What do people think? Does anyone know if someone has done this for a web page?

It seems like it would be very useful for designers: design a page, try it out with automatic 'eye tracking', then redesign to get the right things noticed.

File under: I wish I had thought of that… 

Thursday, January 17, 2019

Back without a bang

Stuff to read

Inequality is one of the main challenges posed by the proliferation of artificial intelligence (AI) and other forms of worker-replacing technological progress. This paper provides a taxonomy of the associated economic issues: First, we discuss the general conditions under which new technologies such as AI may lead to a Pareto improvement. Secondly, we delineate the two main channels through which inequality is affected – the surplus arising to innovators and redistributions arising from factor price changes. Third, we provide several simple economic models to describe how policy can counter these effects, even in the case of a “singularity” where machines come to dominate human labor. Under plausible conditions, non-distortionary taxation can be levied to compensate those who otherwise might lose. Fourth, we describe the two main channels through which technological progress may lead to technological unemployment – via efficiency wage effects and as a transitional phenomenon. Lastly, we speculate on how technologies to create super-human levels of intelligence may affect inequality and on how to save humanity from the Malthusian destiny that may ensue.

Wednesday, October 11, 2017

Bob Martin SOLID Principles of Object Oriented and Agile Design

This is a good explanation of why object orientation really exists. For me it talks about how we can design better future languages.

Tuesday, August 15, 2017

More positions going

Northumbria is looking for more people for the computing department.

Associate Professor in Computer Sciences 
Faculty of Engineering and Environment 
Northumbria University 
Placed on: 11-07-2017  Salary: £49,772 to £55,998 
Professor in Computer Science 
Faculty of Engineering and Environment 
Northumbria University 
Placed on: 12-07-2017  Salary: Competitive Salary 
Senior Lecturer/Lecturer (Fixed Term until 30 June 2018 – Maternity Cover) 
Department of Computer and Information Sciences 
Northumbria University 
Placed on: 12-07-2017  Salary: £33,943 to £48,327 pro rata 
Senior Lecturer/Lecturer in Computer Science 
Department of Computer and Information Sciences 
Northumbria University 
Placed on: 24-07-2017  Salary: £33,943 to £48,327 
Senior Lecturer/Lecturer in Data and Information Science 
Department of Computer and Information Sciences 
Northumbria University 
Placed on: 24-07-2017  Salary: £33,943 to £48,327 

Wednesday, June 21, 2017

Northumbria Learning and Teaching

I've just done day 1 of Northumbria Learning and Teaching. Perhaps it's the post-workshop blues, but I'm really brought down by the whole situation.

I think I entered academia because, as part of the first generation of students with a learning disability, I had a lousy education experience and I thought I could do something about it for the next generation. I'm sitting in a workshop where the reading material about teaching for accessibility isn't even screen-readable, and I'm wondering how I am being co-opted into this dystopia.

I have always believed in the transformative power of education. I want to see myself as giving a hand up rather than being another barrier to be overcome. I know this isn't always the way, but my feeling was that this is what we all generally want. The opposite view is that education is a way of promoting the privilege of a particular social class: in this case, the neurotypical class. I like to believe this isn't the general view of my fellow academics, but I've seen on-the-ground evidence that it isn't true.

If you do training on how to interview people, you learn that interviewers generally want to 'self-reproduce'. "If only I had the time, I would be the best person for the job," interviewers are supposed to think, so they pick the candidate most like themselves. This is supposed to be the source of unwitting bias: you know, I'm white, you're not, so why should you be any good at this job? So I'm not saying people are being deliberately exclusive; it's a non-deliberate outcome of other focuses. I saw this in plain view all over the workshop, but this time it's about getting the students to be more like 'us'. One interpretation of academic assessment, then, is as a way of giving potential employers an excuse (qualifications) to reject people who might just think differently from them. From this perspective, this promotion of me-ness means academic activity at its worst is a way of demoting difference.

In the first day of the sessions, I saw how this comes about. I guess it's all about priorities: one seemingly natural priority pushes another out, and somewhere in all this, the few, through no fault of their own, become the fewer. Not intentionally but unintentionally. I watch people becoming victims not of deliberate exclusion but of impetuous neglect.

I know the system doesn't feel I should be here. I guess this is why I overstay my welcome. I am the thin end of the wedge in the door. I am the irritant who doesn't know their place. I am the upstart. I am the vulgarian. I am the nail that sticks out, and if I can't take the knocks, I should return to the pool my kind came from. But I choose not to. I guess those are the greatest words ever written. I will not, because I choose not to.

Thursday, November 10, 2016

Apps are dying. Long live the subservient bots ready to fulfil your every desire

Some interesting info on new types of interaction processes

Apps are dying. Long live the subservient bots ready to fulfil your every desire: In October 2009, Apple launched in-app purchases for the App Store. The software industry hasn't looked back. In the second half of 2013 alone, Candy Crush Saga made $1.04 billion from microtransactions. More recently, Pokémon GO, Niantic's runaway-success game, made $35 million from in-app purchases in two weeks. According to analysts IDC, revenue from mobile apps, not including advertising, was around $34.2 billion in 2015. For bots, the opportunity could be even greater. "Bots have emerged as a high-potential channel of distribution for mobile services," says Guo. Not only do messaging apps have a captive audience, the cost of developing bots is lower than for apps. "The progression from trivial to sophisticated is going to happen faster," says Underwood. "App developers have been able to learn from the introduction of prior interfaces because it wasn't long ago that mobile apps came on the scene. It took a few years in mobile. With bots I think it will happen in half the time." Libin, one of the bot industry's leading investors, has no doubts about its transformative potential. "There are going to be 100 million bots. It's going to be similar to the app gold-rush, but magnified," he says. As with apps, the vast majority of bots will be pointless, he argues. "But the few hundred that are actually really good are going to be world-changing."