Abe Davis: New video technology that reveals an object's hidden properties | TED Talk | TED.com
Actually a good talk about recovering subtle information (in this case, in pixels).
Tuesday, June 30, 2015
Friday, June 19, 2015
Research Blog: Inceptionism: Going Deeper into Neural Networks
Lovely blog post from Google. I never trust machine learning because you never know what the models have actually learnt. Personally I've always thought that visualising what a model does would be a good way of trying to understand what machine learning is doing. It looks like some people at Google have done a great job. Not only that, the results look lovely.
Research Blog: Inceptionism: Going Deeper into Neural Networks: Why is this important? Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network’s representation of a fork.
Indeed, in some cases, this reveals that the neural net isn’t quite looking for the thing we thought it was. For example, here’s what one neural net we designed thought dumbbells looked like:
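One common way of peeking at a learned representation, in the spirit of what the Google post describes, is activation maximisation: start from a near-blank input and nudge it by gradient ascent until a chosen output unit fires strongly. What follows is only a toy NumPy sketch, with a tiny random-weight network standing in for a trained one (the layer sizes and names are invented for illustration); the Inceptionism images come from running this kind of optimisation over the pixels fed into a full trained network.

```python
import numpy as np

# Tiny stand-in "network" with random weights (a real model would be trained).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))    # input -> hidden
W2 = rng.normal(size=(4, 16))    # hidden -> 4 output units

def forward(x):
    h = np.maximum(0.0, W1 @ x)  # ReLU hidden layer
    return W2 @ h, h

def ascend(unit, steps=200, lr=0.1):
    """Gradient-ascend an input so the chosen output unit fires strongly."""
    x = rng.normal(size=8) * 0.01             # start from a near-blank input
    for _ in range(steps):
        _, h = forward(x)
        # d y[unit] / d x = W1^T (W2[unit] * relu'(W1 x))
        grad = W1.T @ (W2[unit] * (h > 0))
        x = x + lr * grad
        x = x / max(np.linalg.norm(x), 1e-8)  # keep the input bounded
    return x

x_star = ascend(unit=0)
y, _ = forward(x_star)
print(float(y[0]))  # activation of the targeted unit after optimisation
```

With a trained image model the input would be pixels, and the same loop (plus some smoothness priors) produces the dream-like pictures in the post; looking at what the optimised input contains is exactly how you catch a network that learned "dumbbells come with arms attached".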
Tuesday, June 16, 2015
Advanced Chess | Smarter Than You Think
Excerpt | Smarter Than You Think
This is a fantastic example of what happens when humans and machines work together. I have said on numerous occasions that there are two essential streams of thought in computing. The first thinks of computers as replacing human beings. This was very well articulated by Alan Turing, and it is what I call the artificial intelligence myth. The second thinks of computers as augmenting human intellect. This is very much based on the thinking of Douglas Engelbart.
Anyway, I found this excerpt from the book Smarter Than You Think: How Technology is Changing Our Minds for the Better by Clive Thompson, talking about 'advanced chess', or working like a centaur. I have always felt that this is the ultimate example of what augmentation looks like.
Naturally, as someone who is passionate about human-computer interaction, this seems to me like the ultimate smart way of using digital technology.
Wednesday, June 10, 2015
PERDIS
Fellow PERDIS 2015 people
This is the building on campus where PERDIS will be held.
Contrary to Google Maps, I found bus 112 from the train station to be fast (and bendy), taking about 15 minutes. Not bad for €2.50.
I welcome our new robot master overlords.
You may well have read about Stephen Hawking and others fearing that the rise of artificial intelligence will lead to a robot apocalypse.
One thing which people outside the computing community forget is that our future overlords will have to do things like stand up, open doors, you know, stuff. They tend to assume that if a machine can do complex calculations in a millisecond, then the things we don't find tricky can't be a problem for it. In fact, if computing and AI (artificial intelligence) tell us one thing, it is that this isn't true. This underlies the real truth of computing: we are creating a different kind of intelligence.
The stuff we tend to take for granted, like common sense, standing up, or getting out of a car unaided, is actually incredibly complex and tricky. We should never underestimate our abilities in this area. As soon as tasks become circumscribed, machines start to gain the advantage. We tend to underplay our ability to be flexible and general, yet in many respects it is the thing machines are least able to do.
So while we might fear machines taking over, it's less likely than we expect. Artificial intelligence has always seen computing as the 'mind offspring' of people. From Alan Turing onwards, and in films such as Transcendence, A.I. (Artificial Intelligence), 2001: A Space Odyssey, Big Hero 6, Chappie, Westworld, Blade Runner, Her and Ex Machina, we have always seen the machine as a 'synthetic person'. This is good for Hollywood, since it means writers can ask questions about what it means to be a person, but nothing guarantees this is an accurate model for how digital intelligence may evolve. Where computing has worked is when people and machines each do what they do best.
Human-computer interaction is very much the brainchild of the late Douglas Engelbart. His vision sees computing as augmenting human intellect. In effect, we do what we do best, and machines do what they do best; the collaboration creates a synergy. This slightly utopian vision of computing doesn't, of course, generate any sexy plot problems that would appeal to Hollywood writers, which is why we don't see much of it in films and TV. Yet it does seem to better represent our lived experience of computing.
Rodney Brooks said at the robot event: "Anyone who is worried about the robot apocalypse just needs to keep their doors closed."
Ultimately, our failure to realise how much difficulty machines have with what we regard as simple things means we greatly underestimate the problems of automating particular activities. Take driving: while it is possible to cover a large number of miles with a smart cruise control, driving in the city with other drivers around is very difficult. The Google car will have to achieve something like a Turing test for driving before it can successfully replace drivers 100% of the time. Perhaps the Google car is another Google Glass (over-hyped). While I have no doubt a digital car could drive itself among other digital traffic (it's been happening with the Heathrow Terminal 5 pods for a few years now), can simple but super-fast reactions make up for sophisticated recognition and anticipation? For example, if you're being tailgated by a heavy truck and THEN a dog runs out in front of you, clearly your own (and your passengers') safety takes precedence and the dog loses out; but if a child runs out, you would naturally do something different. While we can recognise a child versus a dog, that ability won't come to the Google car soon.
this says it all
Thursday, June 4, 2015
Smelly Maps
Four researchers (Daniele Quercia, Luca Maria Aiello, Rossano Schifanella, and Kate McLean) have recently proposed a new way of capturing the entire urban smellscape from social media data. They ran smell walks in seven cities in the UK, Europe, and the USA and, in so doing, collected smell-related words. Locals were asked to walk around their city, identify distinct odors, and take notes. Smell descriptors were taken verbatim from the smell walkers' original hand-written notes.
The researchers then matched the smell-related words with social media data (tags on Flickr pictures and tweets) for the cities of London and Barcelona. To structure this large and apparently unrelated dataset of smell words, they built a co-occurrence network where nodes are smell words and undirected edges are weighted with the number of times the two words co-occur in the same items. The result of this process is the first urban smell dictionary, containing 285 English terms.
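The co-occurrence network they describe is simple to sketch. Below is a minimal illustration (the items and words are invented, not from their dataset): each item is the set of smell words found in one tweet or one photo's tags, and an undirected edge between two words is weighted by how many items contain both.

```python
from collections import Counter
from itertools import combinations

# Hypothetical items (tweets / Flickr tag sets) reduced to smell words.
items = [
    {"exhaust", "asphalt", "smoke"},
    {"coffee", "bakery"},
    {"exhaust", "smoke"},
    {"coffee", "smoke"},
]

# Undirected co-occurrence network: nodes are smell words, and the edge
# weight between two words counts the items that contain both. Sorting
# each pair gives one canonical key per undirected edge.
edges = Counter()
for words in items:
    for a, b in combinations(sorted(words), 2):
        edges[(a, b)] += 1

print(edges[("exhaust", "smoke")])  # → 2 (co-occur in two items)
```

From a structure like this, clustering the strongly-weighted edges is what lets the authors group 285 individual terms into broader smell categories.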
Term of the day - print impairment.
Print-disability definitions
Persons who are blind or who have an impairment of visual function which cannot be improved, by the use of corrective lenses, to a level that would normally be acceptable for reading without a special level or kind of light;
Persons who are unable to, through physical disability, hold or manipulate a book, or who are unable to focus or move their eyes to the extent that would normally be acceptable for reading;
Persons who are unable to effectively read print due to dyslexia or other cognitive learning disabilities such as Autism or Attention Deficit Disorder (ADD);
Persons who are unable to effectively read print due to a state of arrested or incomplete development of mind, which includes any significant impairment of intelligence and/or social functioning.
The Nightscout Project | We Are Not Waiting
The Nightscout Project | We Are Not Waiting: What is the Nightscout project?
Nightscout (CGM in the Cloud) is an open source, DIY project that allows real time access to a Dexcom G4 CGM from web browsers via smartphones, computers, tablets, and the Pebble smartwatch. The goal of the project is to allow remote monitoring of the T1D’s glucose level using existing monitoring devices.
Tuesday, June 2, 2015
Milton Keynes trials city-wide public 'internet of things' network | Technology | The Guardian
The comments from residents make Milton Keynes sound like a drab, dreary and under-invested (broadband-wise) area. You do wonder if things are as bad as they claim. If so, why not move? Perhaps it's really nice and they are trying to keep people from knowing.
I was struck by this exchange between commenters DunkirkPi and BarrieJ:
" It was only a couple of years ago they were spending more money defending claims from people who had suffered injuries from poorly maintained pavements than they were spending repairing them.
Doesn't surprise me, the pavements are absolutely lethal. Why don't they fix them? If a paving slab protrudes by an two inches then it's only a matter of time before somebody trips over it."
Sounds like the potential for an internet of things robot.
For some reason, I was caught by the rat traps with sensors.