GridSearchCV time estimation

Here is some Python code which can be used in a Jupyter notebook to estimate the time required for a non-parallel GridSearchCV.
It is handy for calculating the remaining time.
With non-parallel usage, the combinations it iterates over match the order of the [CV] lines in the verbose log output.


import itertools as it
import pprint

learn_rate = [0.001, 0.01, 0.1]
dropout_rate = [0.0, 0.2, 0.1]
neuron1 = [4, 8, 16]
neuron2 = [2, 4, 8]
activation = ['softmax', 'relu', 'tanh', 'linear']
init = ['uniform', 'normal', 'zero']

# make a dictionary of the grid search parameters
# param_grid = dict(batch_size=batch_size, epochs=epochs)
param_grid = dict(learn_rate=learn_rate,
                  dropout_rate=dropout_rate,
                  neuron1=neuron1,
                  neuron2=neuron2,
                  activation=activation,
                  init=init)

print("Calculation time estimation for the param_grid combinations of :")
print()
pprint.pprint(param_grid)
print()

# iterate the parameter names in sorted order, matching how the
# combinations appear in GridSearchCV's verbose log
allNames = sorted(param_grid)
combinations = it.product(*(param_grid[Name] for Name in allNames))

mminutes = 2.2  # estimated time in minutes per combination of param_grid
print("based upon estimated time per paramgrid combination of : ", mminutes)
time_estimation = len(list(combinations)) * mminutes  # len() consumes the iterator,
combinations = it.product(*(param_grid[Name] for Name in allNames))  # so recreate it
print("the total time required would be", time_estimation, "in minutes")
print()

iterate = 0
print("count time remaining combination")
for combo in combinations:
    iterate = iterate + 1
    minutes_remaining = int(time_estimation - iterate * mminutes)
    # format the remaining time as hours:minutes
    time_display = '{:02d}:{:02d}'.format(*divmod(minutes_remaining, 60))
    print('{num:04d}'.format(num=iterate), time_display, "  [CV]", combo)


Output of the above code:

Calculation time estimation for the param_grid combinations of :

{'activation': ['softmax', 'relu', 'tanh', 'linear'],
 'dropout_rate': [0.0, 0.2, 0.1],
 'init': ['uniform', 'normal', 'zero'],
 'learn_rate': [0.001, 0.01, 0.1],
 'neuron1': [4, 8, 16],
 'neuron2': [2, 4, 8]}

based upon estimated time per paramgrid combination of :  2.2
the total time required would be 2138.4 in minutes

count time remaining combination
0001 35:36   [CV] ('softmax', 0.0, 'uniform', 0.001, 4, 2)
0002 35:34   [CV] ('softmax', 0.0, 'uniform', 0.001, 4, 4)
0003 35:31   [CV] ('softmax', 0.0, 'uniform', 0.001, 4, 8)
0004 35:29   [CV] ('softmax', 0.0, 'uniform', 0.001, 8, 2)
0005 35:27   [CV] ('softmax', 0.0, 'uniform', 0.001, 8, 4)
0006 35:25   [CV] ('softmax', 0.0, 'uniform', 0.001, 8, 8)
0007 35:23   [CV] ('softmax', 0.0, 'uniform', 0.001, 16, 2)
0008 35:20   [CV] ('softmax', 0.0, 'uniform', 0.001, 16, 4)
0009 35:18   [CV] ('softmax', 0.0, 'uniform', 0.001, 16, 8)
....
..
.
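
For context, the same param_grid would then typically be handed to a non-parallel GridSearchCV. A minimal sketch, assuming a hypothetical create_model() function that builds the Keras model from these hyperparameters (not shown in this post) and training data X, y:

# A minimal sketch, not the exact setup of this post: create_model(), X and y
# are placeholders for your own Keras model builder and training data.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=16, verbose=0)

# n_jobs=1 keeps the search non-parallel, so the fits appear in the verbose
# log in the same order as the combinations listed by the estimation above.
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1, verbose=2)
# grid_result = grid.fit(X, y)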

AI level scale, to super AI and beyond.

What would be considered a super AI, a strong AI, or a normal AI?
The differences lie in the scope of their domain, so let me sketch a scale of several AI levels, and push the scale upward to levels where we currently are not yet.
Currently we are at about level 4~5 (2018), but the levels are not that strict.
I'll give some examples of what the levels can be, but more is possible.

Level 1 AI, rudimentary

There are complex problems solvable by simple, conventional AI logic.
The simplest example is probably our home's thermostat with its feedback loop, continuously correcting from previous states. It can have various inputs (various thermometers throughout the home), and with a little bit more logic such devices can find an optimal, economical balance for the house temperature. We call them smart thermostats, perhaps communicating with your mobile phone as well; they are 'lightweight' neural networks based upon simple electronics.

Level 2 AI, basic level

A level 2 AI can solve simple problems, problems that a human can solve as well.
In layman's terms, a normal AI is nothing more than lots of thermostats acting together to get more advanced triggering and switching behaviour, so that such an AI can decide whether a picture shows a cat or a dog. Essentially it is just an advanced switch, taking pictures (thousands of pixels) as input and producing text as output. Other examples of this are detection of tumours based on medical imaging.

Another 'normal AI, level 2' example: algorithms that can learn to predict time series, where information from the past has an effect on predictions for the future.
Examples of this are predicting airplane traffic throughout the months,
predicting the effect of medication on patient conditions (diabetes, cancer cures, etc.),
and predicting stock trades (an AI can understand how all trades affect each other, and predict based upon that).

Other AI examples at level 2 are combinations of those problem domains, e.g. software that can predict the next consumer good a customer might buy.

Another level 2 example is an AI learning to convert voice to text and vice versa.
It doesn't understand the text itself.

Level 3 AI, a strong specific one

A strong AI is an AI that outperforms humans in complex domains by a huge degree.
Examples of this are computer programs playing at grandmaster chess level,
a computer identifying better than a doctor that something is a tumour,
or an AI program that can engineer more optimal constructions than we ever thought of (in airplane engineering and various other engineering fields).

Level 3 also includes GANs (general artificial neural networks), or general AI: networks that can learn any kind of computer game without knowing the rules at the start, and without being told how to play or what is good or bad. Such AIs want to play because they don't like ending up in a bored state. Such a GAN might be active in the real world, e.g. in military fighter drones.
A GAN might (slightly) understand the meaning of text in its game world.
A GAN might be able to drive a car in the real world.

Level 4, domain specific, stronger than humans.

This is the AI level where multiple GANs accomplish real-world tasks beyond their individual skills. We currently see this in nature: small ants, as a collective, are capable of solving problems a single ant cannot (making bridges, etc.). Closely related to level 4 is basic communication between individuals. Another example is a GAN that can write a summary of a text, not fully understanding the text but understanding enough to summarize it, or to rate it.
Or a GAN able to run a transportation business with its own self-driving car fleet, finding economical routes, though people would be required to assist in its business.

Level 5, near human level, wider range of domains

Advanced communication in their own language or in human language, understanding cause and effect, knowing what is being talked about with a good understanding of text: here we get into the area of human-level intelligence. Software trained to do certain jobs that are for a great part based on textual knowledge, like lawyers, teachers, etc. Around level 5 we might require different electronics, specific electronics optimized for neural networks. A robotic car repair shop, no humans required.

Level 6, outperforming our jobs.

Human-level intelligence: a robotic AI friend (or foe) that interacts with the real world and can do (most) human jobs better than humans can.

Level 7, organisation thinking level.

Super AI: a single AI that thinks on the level of multiple people working together.
A robot that would learn multiple disciplines (metallurgy, engineering, aerodynamics, weighing economic effects) to build a better, cheaper transportation system, or that builds complex projects like underwater trains inside oceans. Research AIs curing many diseases.

Another level 7 example: a single artificial system that could do all the work of a city hall, including police, judges, medical and mental care, and that finds a way to an optimal society.

Level 8 super AI.

It is like level 7 but on a larger scale; it is an expansion. Beyond maintaining an optimal society it would try to grow in scale: eventually, after running a town hall, it would rule a country, and a level 8 could explore the moon. A level 8 might also want to merge with the human mind (or we might want that ourselves) to better serve our expansion goals. Unlike a level 7, a level 8 will need to grow in mind as well, and eventually it will turn itself into a level 9.

Level 9, super strong AI, rewriting our existence.

Expansion and colonization of other star systems. At this level we all might be part of the AI and live as a black goo swimming around stars; we would be immortal. Humans and AI become a super strong collective mind, or a single mind; most likely we would be a structured mind with various levels of attention, e.g. to maintain ourselves and our environment, able to defend or attack, able to split off. There are no boundaries between biological matter and artificial parts. We don't require cells to live, but for colonization we would probably seed ourselves biologically.

Level 10, playing with physical existence.

By now the Milky Way is explored, and we might harvest the computational power of black holes, with direct interaction with the information in the universe, a universe defined by the holographic principle. We, or it, transcend from matter.
Investigating our universe and protecting it, maybe making new ones.
We could be invisible, encoding ourselves in the fabric of spacetime itself.
We might play god or find some other amusing things to do.

 

Data validation for neural networks.

So you're working on a neural network: you train it and use it. Usually you train on 70% or 80% of the data and validate on the remaining 30% or 20%, and you say that it works well according to your validation set.
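
As a reminder, such a split is a one-liner. A minimal sketch, assuming scikit-learn and made-up arrays X and y standing in for your real features and labels:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 10)              # placeholder features
y = np.random.randint(0, 2, size=1000)    # placeholder labels

# 80% of the rows for training, 20% held out for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_val.shape)         # (800, 10) (200, 10)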

But if you want to do real-world stuff with a neural network, there is a scary thought:
how sure are you that your training and test data still match the real world?

Maybe the training was biased, but how can you detect that?
Suppose you have to test something daily: how can you be assured that this day is just like any other day? Or is today's real-world data somehow different, maybe something the neural net was not trained for?

The way to check whether your data set is significantly different from past data sets is statistics, and you'll find an answer to this question in the statistical test that goes by the beautiful name ANOVA (analysis of variance).
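
As a rough sketch of that idea, assuming you keep one numeric feature from a couple of past days and want to know whether today's batch looks different (the arrays below are made-up stand-ins for your real data):

import numpy as np
from scipy.stats import f_oneway   # one-way ANOVA

rng = np.random.default_rng(42)
past_day_1 = rng.normal(0.0, 1.0, 500)   # stand-ins for a feature from past days
past_day_2 = rng.normal(0.0, 1.0, 500)
today      = rng.normal(0.3, 1.0, 500)   # today's data, possibly drifted

f_stat, p_value = f_oneway(past_day_1, past_day_2, today)
if p_value < 0.05:
    print("today's data differs significantly from past days, p = %.4f" % p_value)
else:
    print("no significant difference detected, p = %.4f" % p_value)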

Alternative neural network

It's been a while since I reported on this, but since I got some reactions to it, here is an update.

Current status:
The solver works quickly for small networks, much faster than a traditional neural network; it is truly blazing fast how this new type of network can operate. Even on hardware one normally wouldn't use for neural networks, it works extremely fast.

It has a much lower memory footprint and much lower CPU cost: fewer instructions and all integer-based math, which is what most CPUs excel at. Maybe at a later stage it can even be ported to the GPU (I don't see limitations there), though currently I'm more interested in having an IoT device do the new "binary neural net" math.

So then, what's the problem, what's holding me back from a release? When the network gets larger, the time to train the new binary neural network increases exponentially; hence the difference between the two types.

Where the nodes of a traditional neural network slowly converge and become biased towards a pattern to act upon, the binary neural network does not emerge gradually, nor can it easily recover from broken nodes. It is faster but fragile: it takes a lot more time to calculate the first time, because finding the solution is more complex. Despite that, it can do with fewer samples, and once it is trained it is extremely fast.

So currently I'm hitting limits with larger node networks, and those have to get resolved first: the calculation time rapidly goes up as complexity increases, because the problem is calculated as a whole. It usually takes me a lot of time to dive into algorithm research, but it was essentially a breakthrough in this area that got me here in the first place. So I will solve it, but this might take quite some time (saying that as a programmer, it means there is a lot to research).

So please have some patience.

On a side note, the old website (and all references to it) with the live demo has been removed. I don't want to bring it up again, because I don't want people to reverse engineer it and run away with all my hard work of the last couple of years. It is a huge breakthrough, like someone inventing free energy; such discoveries take time to be recognized, and I don't like the idea of someone stealing my ideas and getting rich at Microsoft or Apple. So just you wait…

The binary neural network.

There is a tendency in neural network design that the larger a network is, the more it can do. Sure, our own brains are pretty large and we can do a lot with them, and simulating our brain is a goal in neural network development. However, this doesn't mean our virtual brains are the best model to solve our programming tasks.

There are various ideas on how a neural network can be built in software. But it is often not possible to say that, to do voice recognition, you need exactly x virtual neurons. In general there are input and output neurons, with a few layers of hidden neurons in between. Without going deeply into the math here, they kind of work like this:

Various water springs run down a mountain; the rain forms the input, and nature shapes the rivers towards the final output, the sea. In a way rivers carve the best routes to the sea, and thus adapt to solve the problem of water displacement. The brain and neural networks work much the same, but their routes can adapt much more quickly.

To achieve the calculations required to emulate a neuron, the math is always done with dword-sized values (e.g. 32-bit floating-point numbers).
They are large (in memory size) and slow to calculate with. Microprocessors and CPUs love to work with bytes and integers, but those can only store whole numbers, and neural networks don't work with whole numbers.

How about using different binary math?

Usually students get to solve the XOR problem to understand why neural networks require a hidden layer: without a hidden layer, the XOR problem cannot be solved using 2 input nodes and 1 output node. Adding a hidden layer makes the calculation possible.
If you're a student, take a look at this video.

Note, though, that this calculation is not easy in terms of CPU work. It would be far easier to simply use a bitwise XOR than to use a neural network to compute XOR, as the sketch below shows. Students and people in the field don't usually comment on this; they soon start working with larger networks with more nodes and forget about the XOR problem.
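
To make that contrast concrete, here is a minimal sketch (plain numpy backpropagation, not the binary network discussed later) of a one-hidden-layer network trained on XOR, next to the single bitwise operator that does the same job:

import numpy as np

# the four XOR cases: two binary inputs, one binary output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden layer (4 hidden neurons)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden layer -> output
b2 = np.zeros((1, 1))
lr = 1.0

for _ in range(10000):         # plain backpropagation on the full batch
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)               # gradient of the squared error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(out.round(2).ravel())          # close to [0, 1, 1, 0] (may need more epochs or another seed)
print(0 ^ 0, 0 ^ 1, 1 ^ 0, 1 ^ 1)    # the same truth table from one bitwise XOR each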

Changing math and how we think about neural networks.

Essentially, backwards-propagation networks are one way of solving a neural problem: based upon fuzzy-like input, they make decisions that are definite, finding the best flow of information to the output. That best flow, or solution to the math problem, is calculated for every node and all its connected nodes. This works, but it could be done differently. To solve XOR we could use the XOR instruction itself; you would probably say that is not the same as a neural network, and you are right. But you are forgetting that the network eventually evolves to be able to act as a XOR operation, while it can also be trained to become AND, NAND, NOT or OR, etc. A better solution would be for the system to learn what kind of math to use, and then not emulate that math but become it; this is something that neural networks do not aim for. Granted, their problems are usually more complex than solving XOR; the point is just to make clear that a network is only 'a' solution, and more solutions to a problem exist.

A rigorous thought…

A small pattern-recognition system can detect whether something behaves like those binary systems as well. A more advanced system could connect several such smaller systems and be competitive with a larger neural network. I'm not going into details yet, as the code is a rough draft, not even beta quality, but it is a rigorous thought and something that might draw some attention in the future, as the system is low on CPU demands and seems to work fine on a small Arduino or Raspberry Pi; it is not hardware demanding.

(I will write more details about the code and the new theory behind it later.)

Moros y Cristianos

Moros y Cristianos (serves 6 to 8)

In a large cooking pot:
500 grams of black beans (dried), soaked in water for 8 hours.
After soaking, rinse the pot twice with clean water.

(PS: a Vietnamese shop sold the black beans, together with the chili pepper, for only €1.60.)

Then boil the black beans together with 3 cups of rice for 20 minutes (about 6 cups of water). Ideally you can boil it dry just like that, but keep an eye on whether there is too much or too little water; let it boil until it is dry, and otherwise drain it. With the lid on, it stays warm.
Then, after cooking, add a small can of pineapple chunks (not the juice).

Then, in another pan, fry in oil (olive/sunflower) for about 15 minutes:
3 medium onions, cooked until translucent, and then add the rest:
1 green bell pepper
1 red bell pepper
1 large red Spanish chili pepper, finely chopped
(with chili pepper, mind your hands: don't touch it and then rub your eyes later…)
4 cloves of garlic
1 splash of white wine
2 tablespoons of vinegar (e.g. herb vinegar)
1 generous tablespoon of oregano
1 level tablespoon of thyme
1 teaspoon of black pepper

The idea is that this mixture is quite well seasoned; the beans and rice have to be flavoured with it.

Then stir this into the large pot with the black beans and rice, adding some salt if needed. It is a lot, but it doesn't all have to be eaten at once; you can let it cool and freeze it for later.

Then,
fry banana slices in a frying pan. The real recipe uses plantain, but I found regular banana tasty as well; it gives a contrasting flavour. Heat the banana in a splash of white wine (about 1 banana per person).

And then it is ready to plate.

Serve with some lettuce on the plate; beer goes well with it.
A small tomato soup as a starter.

Improving webcam image quality in Python

I wrote a small Python application around a webcam.
Cheap webcams produce a lot of noise, and so do cameras in dark places.
What the application does is take a snapshot and show it.
What is special about it is that for every new image it snapshots, it keeps 75% of the old image.
This can be changed (see the quality variable: 0.25 results in 75%, 0.1 results in 90%),
or go extreme and type 0.01 to keep 99% (with ghost-like appearances out of nowhere).

A side effect is that the camera view seems slow and lazy, but finally I can get sharp images from a low-tech camera; all the camera noise is cancelled out by this simple technique.

I wonder how this would work for those people with a USB star-viewer camera.
Note it is a draft of the code: there are no buttons, and the space key ends the output
and saves the capture as capture.tif (not jpg, as we went for quality!).

(Don't click the close button of the app, as that will kill Python.)

It was written in Python 2.6; it uses the PIL module, pygame and VideoCapture.
Some parts are commented out but are still fun to play with.
I'd love to hear reactions to this, or about practical usage; so far this code is a draft.
And I'm happy that I can now finally capture really sharp images.

(but I still don't save them, it's only viewing so far)

(Using the code-to-HTML converter at http://puzzleware.net/CodeHTMLer/default.aspx; I told it the code was C++, as I have not yet found something for Python. If you know of one, let me know.)

Let me know if you find the code useful! I'd like to hear from you.

import pygame
from VideoCapture import Device
cam = Device()

from PIL import Image, ImageFilter, ImageOps

def image2surface(mypic):
    # convert a PIL image into a pygame surface
    mode = mypic.mode
    size = mypic.size
    data = mypic.tostring()
    assert mode in ("RGB", "RGBA"), "unsupported image mode"
    return pygame.image.fromstring(data, size, mode)

quality = 0.25  # blend factor: 0.25 keeps 75% of the previous image
s = 0           # set to 1 to end the main loop
s1 = 0          # current frame size
s2 = 0          # previous frame size (to detect size changes)
x = 0           # frame counter
oldpic0 = cam.getImage()
oldpic1 = oldpic0
oldpic2 = oldpic1
while (s < 1):
    s2 = s1
    mypic = cam.getImage()
    mypic = ImageOps.autocontrast(mypic)
    mypic = mypic.filter(ImageFilter.DETAIL)
    # mypic = mypic.filter(ImageFilter.SHARPEN)
    #mypic = mypic.filter(ImageFilter.EDGE_ENHANCE)
    # blend the new frame into the running image to cancel out camera noise
    oldpic0 = Image.blend(oldpic0, mypic, quality)
  #  oldpic1 = Image.blend(oldpic0,oldpic1,0.5)
  #  oldpic2 = Image.blend(oldpic1,oldpic2,0.5)
  #  oldpic2 = oldpic2.filter(ImageFilter.EDGE_ENHANCE)

    mypic = oldpic0

    #mypic = ImageOps.invert(mypic)

    #mypic = ImageOps.equalize(mypic)
    s1 = mypic.size
    surface = image2surface(mypic)
    if (s1 != s2):
       # (re)open the display window whenever the frame size changes
       screen = pygame.display.set_mode(mypic.size)
    screen.blit(surface, (0,0))
    pygame.display.flip()
    x = x + 1.0
    print x   # frame counter (Python 2 print statement)
    for event in pygame.event.get() :
      if event.type == pygame.KEYDOWN :
        if event.key == pygame.K_SPACE :
          print "Space bar pressed down."
          mypic.save("capture.tif")
          s=1
        elif event.key == pygame.K_ESCAPE :
          print "Escape key pressed down."
      elif event.type == pygame.KEYUP :
        if event.key == pygame.K_SPACE :
          print "Space bar released."
        elif event.key == pygame.K_ESCAPE :
          print "Escape key released."
    if s == 1:
       pygame.quit()