OBrien Science
Saturday, December 10, 2016
Biometric Health Tracking
Notes:
http://eclipsejpa.blogspot.ca/2013/09/restjax-rs-ios7-weblogic-12c-bluetooth.html
http://www.medarcade.com/uploads/4/5/5/4/45547265/ekg_dale_dubin.pdf
Saturday, February 13, 2016
Lower Latitude Jet Stream means lower temperatures ironically
For the last couple of years I have noticed that the jet stream has been moving further south in its random meanderings; it now reaches below the 45th parallel. This is likely due in part to global warming combined with the normal 11- and 22-year sunspot cycles.
The jet stream acts as a barrier to Arctic air masses - primarily high-pressure (clockwise-rotating; the mnemonic: a clock is high on the wall) air masses that bring cold air.
Today (13/02/2016) I noticed the temperature dipped to -28 C (-19 F) - which means a 50 C (90 F) difference across the walls of our house.
My FLIR camera is required to get a corresponding second reading - unfortunately it is not rated for Canadian weather. The FLIR will work in Alaska along the coast, which is warmer than us today.
The canal will be excellent for skating today - except for the wind chill, which at 20 km/h may mean temperatures in the -40 C range.
https://twitter.com/_mikeobrien/status/698515768653123584
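The wind-chill figure can be sanity-checked against the standard Environment Canada / NWS wind chill index. A minimal sketch (the class and method names are mine, not from the original notes) - at -28 C and 20 km/h the formula gives roughly -41:

```java
public class WindChill {
    /**
     * Environment Canada / NWS wind chill index.
     * tempC in Celsius, windKmh in km/h; valid for temp <= 10 C and wind >= 4.8 km/h.
     */
    static double windChill(double tempC, double windKmh) {
        double v = Math.pow(windKmh, 0.16);
        return 13.12 + 0.6215 * tempC - 11.37 * v + 0.3965 * tempC * v;
    }

    public static void main(String[] args) {
        // -28 C with a 20 km/h wind
        System.out.printf("-28 C at 20 km/h feels like %.1f C%n", windChill(-28, 20));
    }
}
```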
Tuesday, February 28, 2012
URLs
HTML5
Very impressed with #_OPP or #OttawaPolice ghost car that escorted a Canada goose family of 7 across the March/Queensway N overpass at 08:04 on 2013-05-23.
https://twitter.com/_mikeobrien/status/337544671268061184
The 5 baby geese made it over the curb with about 100 cars held from 3 directions - thank you to the police service.
Sunday, January 15, 2012
Lake Ice Periodic Expansion Wavefronts
I was at my brother-in-law's place on the shore of the large lake beside Westport, Ontario, Canada. An absolutely amazing phenomenon happened. We had been standing on the lake for about 15 minutes when an expansion wave went through the lake and passed under our feet at a speed that felt like the speed of sound. Everyone has heard cracks when on the ice - but this was something completely different and very rare. It seems that the ice at this early stage in its development has not yet formed any major defects or cracks, and it undergoes expansion due to further freezing as night falls. The wave of molecular expansion can be heard approaching from a mile away, felt as it moves your feet a fraction of a millimetre, and then hits the shore. The sound is a bit like a light sabre igniting just under your feet.
Thursday, August 4, 2011
Radar
http://wiki.eclipse.org/EclipseLink/Examples/Radar
R1: Determine layered GIS data for rainfall distribution at 1km resolution @ 200 km range
R2: Provide historical data
R2.1: Provide volumetric data per GPS position
R3: Provide 30 min prediction window
R3.1: Extend prediction window by including surrounding area weather
R4: Provide present status per GPS position
R5: Map noise areas for each radar site
DI1: Database: Derby or Oracle GIS/SDO aware?
DI2: Database Expected Volume
There are 14 levels of rainfall represented by color bands from purple to light blue. If we include the ground color (green-grey), rivers (navy) and borders (black) we have 17 levels. We also want to encode null/unset/no-data as white or -1 - this gives us 18 levels, which fits in a single 8-bit byte.
There are approximately 500x480 pixels at 1 km resolution, which works out to about 234 KB decoded (one byte per pixel).
We expect 24 x 6 = 144 images / site / day (one every 10 minutes), which comes to 234 KB x 144 = about 34 MB / day / site.
We therefore need about 12 GB of storage / year / site. A standard 2 TB drive, which is around the effective limit of most databases, will hold roughly 169 years of data (disregarding compression gains and error-handling losses). We should be able to hold our goal of 10 years of radar data for 16 sites comfortably.
This assumes we store raw data without GPS coordinates; we may want to store only colored pixels.
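The DI2 estimate above can be reproduced as a back-of-envelope calculation (a sketch only - the class is mine, and exact results vary by a few percent depending on rounding, landing in the 165-175 year range rather than exactly 169):

```java
public class RadarStorageEstimate {
    // ~1 km resolution grid, 18 levels fit in one byte per pixel
    static long imageBytes() { return 500L * 480; }

    // one image every 10 minutes = 24 x 6 = 144 images/day
    static double dayMB() { return imageBytes() / 1024.0 * 144 / 1024.0; }

    static double yearGB() { return dayMB() * 365 / 1024.0; }

    // years of one site's data that fit on a standard 2 TB drive
    static double yearsPerDrive() { return 2.0 * 1024.0 / yearGB(); }

    public static void main(String[] args) {
        System.out.printf("image=%d bytes, day=%.1f MB, year=%.1f GB, years/drive=%.0f%n",
                imageBytes(), dayMB(), yearGB(), yearsPerDrive());
    }
}
```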
Radar Sites:
http://en.wikipedia.org/wiki/Canadian_weather_radar_network
http://en.wikipedia.org/wiki/Beckwith,_Ontario
http://www.msc-smc.ec.gc.ca/projects/nrp/image_e.cfm?scale=true&s_image_querystring=city%3Dfranktown%26number%3D3&s_image_referrer=franktown%5Fe%2Ecfm&city=franktown&number=3
http://radar.weather.gov/Conus/index_loop.php
DI3: Persistence of Image data in JPA
DI4: Image URL capture is locking up (blocking indefinitely on a socket read) after a couple of weeks
Stack trace:
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.read(SocketInputStream.java:129)
java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
java.io.BufferedInputStream.read(BufferedInputStream.java:317)
- locked java.io.BufferedInputStream@c32360
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:695)
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:640)
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
- locked sun.net.www.protocol.http.HttpURLConnection@11b865
org.obrienscience.radar.integration.ResourceManager.captureImage(ResourceManager.java:141)
org.obrienscience.radar.integration.ResourceManager.captureImage(ResourceManager.java:91)
org.obrienscience.radar.integration.ResourceManager.captureImage(ResourceManager.java:456)
org.obrienscience.radar.integration.ResourceManager.captureRadarIndefinitely(ResourceManager.java:699)
org.obrienscience.radar.integration.LiveRadarService.performCapture(LiveRadarService.java:67)
org.obrienscience.radar.integration.ApplicationService.performCapture(ApplicationService.java:326)
org.obrienscience.radar.integration.ApplicationService.performCapture(ApplicationService.java:316)
org.obrienscience.radar.integration.LiveRadarService.performCapture(LiveRadarService.java:57)
org.obrienscience.radar.integration.LiveRadarService.main(LiveRadarService.java:101)
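The thread dump shows the capture thread parked forever in socketRead0 - HttpURLConnection has no connect or read timeout by default, so one dead server response can hang the capture loop for good. A plausible mitigation (a sketch; the captureImage signature here is hypothetical, not the actual ResourceManager API) is to set finite timeouts so a stalled request throws SocketTimeoutException and the loop can retry:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimeoutCapture {
    static final int CONNECT_TIMEOUT_MS = 10_000; // time allowed to establish TCP
    static final int READ_TIMEOUT_MS = 30_000;    // max time between successive reads

    /** Open a connection that fails fast instead of blocking in socketRead0 forever. */
    static HttpURLConnection open(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(CONNECT_TIMEOUT_MS);
        conn.setReadTimeout(READ_TIMEOUT_MS);
        return conn;
    }

    /** Hypothetical replacement body for the image-capture call. */
    static byte[] captureImage(URL url) throws IOException {
        HttpURLConnection conn = open(url);
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            conn.disconnect();
        }
    }
}
```

A SocketTimeoutException can then be caught in captureRadarIndefinitely() and treated as a skipped frame rather than a hung process.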
References:
http://www.radar.mcgill.ca/who-we-are/history.html
http://java.sun.com/javase/technologies/desktop/media/
http://en.wikipedia.org/wiki/Numerical_weather_prediction
http://en.wikipedia.org/wiki/Weather_radar
http://www.dtic.mil/dtic/tr/fulltext/u2/a261190.pdf
Scratchpad:
Genetics
DNA Testing, Storage and Interpretation
https://www.23andme.com/
Friday, March 25, 2011
Distributed and Multithreaded Applications
A problem that maps very well to multiple cores and is easily parallelized is the computation of the Mandelbrot set.
The following graph is the result of an experiment where I varied the number of threads used to render each frame of a deep zoom, to the limit of ''double'' floating point precision. When I run this algorithm as a traditional single-threaded application it takes up to 800 seconds to render a 1024x1024 grid from 1.0 down to 1 x 10^-16. However, when I start adding threads I see the best speedup when I use at least as many threads as there are hard processors (non-hyperthreaded). The performance increase nears its maximum of 8x for an Intel Core i7-920 as I approach 512 threads.
As you can see from the graph, we benefit from a massive number of threads - as long as they are independent. The Mandelbrot calculation, however, is not homogeneous - computing the central set requires far more iterations than the outlying areas. This is why each parallel algorithm must be tuned to the problem it is solving. If you look at the screen captures of performance during the runs with various thread counts you will see what I mean. The processor is not being exercised at its maximum capacity when the ''bands'' assigned to particular threads finish before other threads that are performing more calculations than their peers. If we increase the number of bands, we distribute the unbalanced load among the cores more evenly - at a slight expense in thread coordination/creation/destruction.
Multicore Rendering of Mandelbrot Set
The following runs are on a 1024x1024 grid zooming from 1.0 to 0.0000000000000001; they take from 800 down to 67 seconds depending on the number of threads used concurrently. Notice that I have a temporary issue with shared variable access between threads - some of the pixel coloring is off.
As you can see, processor usage goes from 12% for a single thread, through 50% for 8 threads, to 99% for 128+ threads (we need to leave some CPU cycles to the system so the mouse still functions).
Why do we need so many threads? If even one thread takes longer than the others that have already completed their work units, the entire computation is held up. We therefore use more work units than there are threads.
A better algorithm would distribute work units asynchronously, instead of the synchronous MapReduce style we currently use. When a thread finishes, it can pick up a part of the image that is still awaiting processing. We would need to distribute work units more like packets in this case.
1 thread on an 8-core i7-920 takes 778 sec
2 threads on an 8-core i7-920 takes 466 sec
16 threads on an 8-core i7-920 takes 138 sec
128 threads on an 8-core i7-920 takes 114 sec
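The over-decomposition idea above can be sketched with a fixed thread pool and one task per scan line: idle threads simply pull the next row, so no core sits waiting on a band that finished early. This is a minimal sketch - the grid size, iteration budget, and class name are illustrative, not the blog's actual renderer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RowScheduler {
    /** Escape-time iteration count for one pixel of the Mandelbrot set. */
    static int iterations(double cr, double ci, int maxIter) {
        double zr = 0, zi = 0;
        int i = 0;
        while (i < maxIter && zr * zr + zi * zi <= 4.0) {
            double t = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci;
            zr = t;
            i++;
        }
        return i;
    }

    /** Render with one Callable per row - far more work units than threads. */
    static long render(int width, int height, int maxIter, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Callable<Long>> rows = new ArrayList<>();
        for (int y = 0; y < height; y++) {
            final int row = y;
            rows.add(() -> {
                long sum = 0;
                for (int x = 0; x < width; x++) {
                    double cr = -2.0 + 3.0 * x / width;    // real axis [-2.0, 1.0]
                    double ci = -1.5 + 3.0 * row / height; // imaginary axis [-1.5, 1.5]
                    sum += iterations(cr, ci, maxIter);
                }
                return sum;
            });
        }
        // invokeAll lets idle threads steal the next waiting row - load balances itself
        long total = 0;
        for (Future<Long> f : pool.invokeAll(rows)) total += f.get();
        pool.shutdown();
        return total;
    }
}
```

Because each row is an independent work unit, the total iteration count is identical regardless of thread count - only the wall-clock time changes.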
Thread Contention for Shared Resources:
For our multithreaded Mandelbrot application - which currently is not @ThreadSafe - we encounter resource contention specific to the graphics context. This type of contention is the same for any shared resource, such as a database. The issue is that setting a pixel on the screen is not an atomic operation - it consists of setting the current color and then drawing the pixel (the Java2D API may require multiple internal rendering steps as well). The result is that another thread may change the color of the graphics context before the current thread actually writes the pixel - resulting in noise, or more accurately, '''data corruption'''.
Note that no noise or data corruption occurs when we run a single thread - the problem appears only when we run multiple threads concurrently.
color = Mandelbrot.getCurrentColors().get(iterations);
color2 = color;
// these 2 lines need to be executed atomically - however we do not control the shared graphics context
synchronized (color) { // this does not help us with drawRect()
    mandelbrotManager.getgContext().setColor(color);
    // drawRect is not atomic, the color of the context may change before the pixel is written by another thread
    mandelbrotManager.getgContext().drawRect((int)x, (int)y, 0, 0);
}
if (color2 != mandelbrotManager.getgContext().getColor()) {
    System.out.println("_Thread contention: color was changed mid-function: (thread,x,y) "
        + threadIndex + "," + x + "," + y);
    // The solution may be to rewrite the pixel until the color is no longer modified
    // or only allow a host thread to write to the GUI
}

_Thread contention: color was changed mid-function: (thread,x,y) 2,298,22
_Thread contention: color was changed mid-function: (thread,x,y) 15,140,155
_Thread contention: color was changed mid-function: (thread,x,y) 15,140,156
_Thread contention: color was changed mid-function: (thread,x,y) 15,140,157
_Thread contention: color was changed mid-function: (thread,x,y) 15,141,151
_Thread contention: color was changed mid-function: (thread,x,y) 2,307,25
_Thread contention: color was changed mid-function: (thread,x,y) 15,143,154
_Thread contention: color was changed mid-function: (thread,x,y) 15,144,152
_Thread contention: color was changed mid-function: (thread,x,y) 13,0,130
_Thread contention: color was changed mid-function: (thread,x,y) 11,0,110
The better solution would be to designate a host thread that coordinates all the unit-of-work threads and acts as a single proxy to the GUI - only one thread should update AWT or Swing UI elements, as most of them are not thread-safe by design. Multithreaded distributed applications need to be very careful when using GUI elements. For example, if I do not introduce at least a 1 ms sleep between GUI frames, the entire machine may lock up when 100% of the CPU is given to the calculating threads.
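The single-proxy pattern can be sketched with a BlockingQueue drained by one designated host thread: workers enqueue pixel updates and never touch the shared target themselves, so no synchronization on the graphics context is needed. This is an illustrative sketch (the class is mine, and an int array stands in for the graphics context - a real Swing app would have the host thread hand updates to SwingUtilities.invokeLater):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class HostThreadProxy {
    static final class PixelUpdate {
        final int x, y, rgb;
        PixelUpdate(int x, int y, int rgb) { this.x = x; this.y = y; this.rgb = rgb; }
    }
    static final PixelUpdate POISON = new PixelUpdate(-1, -1, 0); // shutdown marker

    final int[][] frame;                       // stand-in for the shared graphics context
    final BlockingQueue<PixelUpdate> queue = new ArrayBlockingQueue<>(4096);
    final Thread host;                         // the only thread allowed to write pixels

    HostThreadProxy(int width, int height) {
        frame = new int[height][width];
        host = new Thread(() -> {
            try {
                for (PixelUpdate p; (p = queue.take()) != POISON; ) {
                    frame[p.y][p.x] = p.rgb;   // single writer - no contention, no corruption
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "gui-host");
        host.start();
    }

    /** Worker threads call this instead of touching the shared context directly. */
    void submit(int x, int y, int rgb) throws InterruptedException {
        queue.put(new PixelUpdate(x, y, rgb));
    }

    void shutdown() throws InterruptedException {
        queue.put(POISON);
        host.join();
    }
}
```

Because set-color and draw-pixel are collapsed into one message, the non-atomic two-step that caused the contention log above simply cannot interleave.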