Thursday, August 22, 2019

Arduino Touch Sensors and i2c OLED display tests

Today I have been looking at some inexpensive touch sensors and a 128x64 pixel graphical OLED Display unit.

It's been a while since I have had the chance to do much in the way of experimentation, and I don't really have the bench space for much these days. (In fact I don't really have a bench any more - it's more of a shelf now! But that's another story.)

Anyway, I picked up two touch sensor PCBs and the OLED for just a few pounds. Both are plentifully available on both Amazon and eBay (see the bottom of the post for links if you want to grab them and experiment yourself).


The touch sensors are a simple PCB with just three connections: +V, 0V and output (hi/lo).

Really easy to use and fantastically sensitive to touch. No false triggering noted. Awesome!


I have two here plugged into the breadboard. One is designated as "dot" and the other is "dash" (as this is a test for a future morse paddle keyer system).

A simple Arduino sketch running on a Nano checks whether "dot", "dash" or both are touched (see code below).





The plan is to mount the two touch sensors vertically opposite each other with roughly the same spacing as a conventional paddle key.

Currently I have a simple spacer block mocked up in Tinkercad ready for 3D printing as part of the eventual enclosure for the device.





And at the moment just the status of the touch sensor inputs is pushed out to the 128x64 pixel OLED display as plain text.

That's about all we do for now, but there will of course be a lot more going on than this eventually as it evolves into part of a touch-sensitive Arduino-powered paddle keyer system.

More info will undoubtedly follow.



Sensors : There really isn't much to say about the touch sensors. You simply connect them up to either 3V3 or 5V (either is fine) and ground. The third pin goes high when you touch the PCB (either side), and returns low when you release. I don't have my scope here, but based on what I can see there doesn't seem to be any appreciable delay or bounce on the make/break action. You could use them in any number of fun ways, as they work through some reasonably thick paper, plastic or fabric - ie: a totally invisible switch if you so needed it.

OLED : Although I have used the ever popular 16x2 character LCD display units with Arduinos many times in the past, this is the first time I have actually got around to interfacing a graphical OLED device to one.

It took a while to find an Arduino library that worked with the OLED. Initially I could not coax it into life at all, and a little bit of googling was needed to find the correct I2C bus address for it. The usual Adafruit SSD1306 library did talk to it once I edited the bus address, but produced a garbled result, with just a couple of lines showing the test patterns and the rest of the display just noise.

A lot of head scratching and some more googling followed!

Eventually the reason for this came to light. I had assumed when I ordered the OLED unit that it was a standard SSD1306 based device, but it turns out it is actually an SH1106 device.

Very similar, but with minor driver differences.

Thankfully the SSD1306Ascii Arduino library by Bill Greiman https://github.com/greiman/SSD1306Ascii is not only super memory efficient for simple text display on SSD1306 devices - but also has support for the SH1106 variant included too. Yay!

If you have a recent version of the Arduino IDE you can load it via the Library Manager; if not, the library can be grabbed from GitHub at the above address and installed into your Arduino libraries subfolder.

This is the simple Arduino sketch I used as a proof of concept test :


// Simple I2C test for eBay / Amazon SH1106 128x64 OLED.

#include <Wire.h>
#include "SSD1306Ascii.h"
#include "SSD1306AsciiWire.h"

// 0X3C+SA0 - 0x3C or 0x3D
#define I2C_ADDRESS 0x3C

// Define proper RST_PIN if required.
#define RST_PIN -1

// Define the digital input pins from the touch sensors
const int dotPin = 4;
const int dashPin = 5;

int dotState = 0;
int dashState = 0;

SSD1306AsciiWire oled;
//------------------------------------------------------------------------------
void setup() {
  Wire.begin();
  Wire.setClock(400000L);

#if RST_PIN >= 0
  oled.begin(&SH1106_128x64, I2C_ADDRESS, RST_PIN);
#else // RST_PIN >= 0
  oled.begin(&SH1106_128x64, I2C_ADDRESS);
#endif // RST_PIN >= 0

  oled.clear();          // clear the display
  oled.displayRemap(1);  // rotate the display 180 degrees
  oled.setFont(lcd5x7);  // select a font
  oled.set2X();          // double char size

  pinMode(dotPin, INPUT);
  pinMode(dashPin, INPUT);
}
//------------------------------------------------------------------------------
void loop() {

  oled.setRow(1);  // move to row 1, column 1
  oled.setCol(1);

  dotState = digitalRead(dotPin);   // read the inputs from the touch sensor boards
  dashState = digitalRead(dashPin);

  if (dotState == HIGH && dashState == HIGH) {
    oled.println("Both");
  } else if (dotState == HIGH) {
    oled.println("Dot ");
  } else if (dashState == HIGH) {
    oled.println("Dash");
  } else {
    oled.clear();
  }
}


That should all pretty much make sense without a lot of further explanation.  I will post a lot more info as the project progresses in the future.

If you want any of the parts I used you can find them here:

Touch sensors : https://amzn.to/2Nq8p0c
OLED i2c screen : https://amzn.to/31XItwW
Arduino Nano clone : https://amzn.to/2MwVR7t

And if you need a breadboard and jumper cables there is a nice starter kit : https://amzn.to/31UkHSn



Friday, March 08, 2019

Balloon Project

It's been a few years now since I last had chance to blog about high altitude balloon projects.

Sadly, I doubt I shall be in a position to mount my own excursion to the upper atmosphere in the foreseeable future, as these days I lack access to any kind of workshop or test bench, or the funds and assistance needed for any projects, so I am very limited in what I am able to do.

I was therefore really pleased to discover today the iforce2d YouTube collection of videos on his Arduino based balloon project. It so closely matched my planned research that I feel I should share it here in the hope that it is of some benefit to those who still visit my blog from time to time.


If you pop over to https://youtu.be/8wRFqRTZFOM (or play the above video) you will find the first of five excellent videos following the design, test, build and flight of his amateur high altitude balloon project.

A really excellent project, with some useful data and many useful pointers and learning outcomes.

Friday, November 16, 2018

Can you really do anything useful with a Chromebook?



As the title says, I've wondered for a while now whether it is really possible to do anything beyond a little document editing and web browsing using a Chromebook.

Today I bit the bullet and ordered an Acer R13 2-in-1 Chromebook to find out, and I plan to share my experiences with you when it arrives.

I must admit, I have done a little digging beforehand to get some idea of relative specs and capabilities, and also what to expect...

...And it's a bit of a minefield in some ways.

Basically, not all Chromebooks are created equal. There is a huge variance in CPU spec, RAM, disk size and screen resolution. Add to this that some Chromebooks now support both Android apps and Linux apps out of the box, some can be coaxed into Linux app support via the unstable release channel (more on that in a future article), and some will just never support Linux. (There is also the option to dual boot into Linux on most Chromebooks, though how easy this is and how well it works varies just as much.)

Why do I need linux apps?

Well, putting it simply, the web based Chrome OS apps, and those available from the Chrome store, are apparently great for what they are intended for, but for any serious dev work or graphics/photo manipulation the consensus of opinion seems to be that they are too lightweight and web-reliant. The same is true of most Android apps too (though notable exceptions are Snapseed and Adobe Lightroom CC, which are excellent Android photo manipulation apps that I use frequently on my Android tablet).

As someone who spends a lot of time writing code in HTML5, CSS, Javascript, PHP, Python etc, and also does a lot of graphical work, I really need tools like Sublime Text and Gimp - so the ability to either run Linux apps natively or to dual boot is pretty much essential.

I also tend to use a wacom tablet and stylus quite often, and would like to be able to work in the same way direct on screen, so a 2in1 laptop/tablet form factor is also important to me. 

So why take this gamble? And why is a linux zealot like myself selling out to the (do no) evil Google empire?

Well, partly out of curiosity, partly because my daily life is so entwined in Google products that I doubt it will make much difference anyway, and partly out of being a cheapskate!

Today's purchase is, as I mentioned, an Acer R13 2-in-1 Chromebook (pictured above), which has just in the last few hours had a price drop to only £299. For a brand-new-in-the-box 13" laptop that doubles up as a tablet (with stylus support), with 4GB of RAM, a 64GB solid state drive, full 1080p resolution and up to 12 hours of battery life... well, that's pretty tempting! (The link takes you to the eBay site for Currys/PCWorld, who seem to have been first to offer this price reduction.)

NB: that's a clearance price, as there is a newer model due out any day now, though that's set to retail near the £700 price point for a spec that doesn't seem that much better. Snap up a bargain???

From the research I've done this looks like the best option for a combined Chromebook/tablet with sensible specs that "should"!! (fingers crossed!!) support both Android and Linux apps, and can be made to dual boot into Linux as a third option. (I really hope the reports I have read are correct!)

When it arrives, I plan to publish a series of posts - or possibly even a dedicated blog/vlog - covering the trials, tribulations and hopefully triumphs along the journey to making it a useful dev machine for today's nomadic developers!

Watch this space for updates...





Thursday, October 25, 2018

More moocs!

What..? another moocs post??

Yes, it seems that all I blog about these days are online courses, and this is perhaps to be expected as I've been actively investigating the typical range and quality of courses available.

The reason for this is simple : Research....

I'm leading a small team of engineers and coders on an exciting new web-based learning project built around the popular Raspberry Pi as a platform for both software and hardware projects, as part of the RaspberryPi Mug online magazine. (Watch this space for updates!)

Anyway, as the post title suggests, I thought it worth a quick blog post about another one of the recent mooc collections I have been researching.

The BitDegree site tends not to be indexed by the bigger online mooc spiders, but it has a great range of courses, and as most of them are free, it is worth a mention here.

The main categories are:

* web development
* game development
* data science
* programming languages
* crypto and blockchain
* marketing
* graphic design
* ecommerce
* business
* personal development
* machine learning
* information security

I personally have looked at the web dev, programming and info security courses, and found them to be really good.

Many of the courses are "gamified" (to use their terminology): you are set interactive learning challenges, with feedback and a score based incentive system.

If you fancy upskilling in any of the above areas, BitDegree is a good place to start.

Wednesday, July 25, 2018

Learn HTML, CSS, Javascript and more online for free!

As regular visitors to my blog will realise, I often post code snippets and examples for blog and web based projects that I have knocked together and would like to share. Many visitors comment on how useful they find them in their own projects.

But also I get quite a few queries from visitors who are perhaps new to HTML, CSS and JS, who can copy and paste code just fine, but lack the depth of skills needed when it comes to correctly integrating it into their own sites, or extending the functionality to suit their own needs.  I do my best to help out, but at the end of the day, there is no substitute for a basic understanding of web development.

Traditionally you learned to code for the web in one of three ways:

  • you learned from a book, 
  • you did a course at school or college,
  • you just looked at the source code for other people's sites and tried to figure out how stuff worked.


All of these approaches have their own merits, and I'm going to say that you should probably do all three anyway. But for a while now there has been an alternative. I refer of course to MOOCs (Massive Open Online Courses), which as regular readers will realise is also something I'm quite passionate about and frequently contribute to or help develop.

There are literally hundreds out there now on web design using HTML5, CSS3 and JS, and many go on to cover more complex topics such as JSON, AJAX, jQuery and frameworks like AngularJS. Most are well written and provide current and useful learning systems. But many have a downside: you have to pay. Some offer the learning materials free, but often expect quite large sums for certification or proof of completion of each module. And worse - few of those qualifications really carry much weight with potential employers. That's fine if you are just using them to gain skills for your own purposes and enjoyment, but after all that hard work it would be nice to have something tangible to show for your efforts.

Thankfully there is a free* alternative. (I say free with an asterisk because although you can complete all aspects of the courses and projects entirely free - and that includes the proof of competency -  those who can afford to are encouraged to make a donation - and you really should if you find it useful.)

I refer of course to https://freecodecamp.org



In my opinion this is probably the most comprehensive learning system out there for those wanting to gain web skills. Although it may not have the most flashy interface, the simple browser based lessons and coding interface literally take you from zero knowledge through to some advanced concepts at whatever pace suits your available time and desire to learn. There are literally thousands of hours of learning available. If I was starting out learning web development, or needed a refresher or skills update (more advanced learners can drop in at any point in the curriculum), this is the on-line learning tool I would choose.

You can sign up using Google or other social media credentials, or via email and a password. This takes just seconds and you are ready to go - starting quite literally at "Hello World!" level. From what I have seen of the curriculum, the steps and order of the learning concepts are pretty much spot on, and very similar in fact to the learning model I used when teaching the basics of web development to a wide range of learners.

As an aside, even if your main interest is not web development, this is still a great introduction to coding concepts and general markup systems, which you WILL use at some level in pretty much any coding platform. A good many pundits are predicting that Javascript will be one of the most sought-after coding skills of the next decade.

Back to the system itself: as mentioned, everything is in-browser, so the course is completely cross-platform. As long as you have a web connection you are good to go. There are various options for customising the interface and controlling public and private profile options - including the ability to link your progress and earned qualifications to various social media accounts. Also provided is an integrated forum system where users can interact and discuss their progress and learning - though unlike some MOOCs where a degree of participation in group discussion is mandatory, here it is optional.

As mentioned, this might not be the glitziest course out there, but I rate it as probably one of the best. Why not sign up today and give it a go?

Friday, June 29, 2018

Blocking the loading of third party scripts in Blogger until cookies are agreed, for GDPR cookie compliance.

More than a month after GDPR came into force it is still a hot topic. Many people have discovered that their own simple hobby blogs fall within the scope of GDPR compliance, and have struggled to understand exactly what they need to do to remain on the right side of the law.

There are plenty of blog posts and guidance sites out there now offering advice on banners, privacy statements, and what it means to be a data controller, which will help the typical blogger navigate this minefield.

But there is one area which even for tech-savvy bloggers is still problematic - the rule that no cookies other than a basic non-data session cookie can be written without the express prior consent of a site visitor from a GDPR country. Blogspot, which hosts many of my sites, has been fairly good about controlling the writing of its own cookies (though the navbar in some templates still occasionally leaks cookies while the cookie banner is on show, for some reason??). What they don't cover - simply because it's really not their problem - is control of cookies written by any third party gadgets that bloggers add to their sites, eg: adverts, twitter feeds, social media tools and affiliate product linking.

I suggest at this point all blog owners pay a visit to https://www.cookiemetrix.com/ and type in the URL of their blog to see what cookies are being written and whether general GDPR compliance is met. Checking your site via proxy sites is NOT a safe way of determining this, as numerous tests have shown that what is served by proxies can be very different from a direct connection due to their own internal processes. Use a dedicated free GDPR test site like CookieMetrix.

If third party cookies are an issue - read on!

OK.... first a bit of background. How do bloggers normally add a third party script or widget to their Blogspot site?  Well, they could roll up their sleeves and directly edit the HTML/XML code for their templates, but the vast majority will add a Blogspot HTML/Javascript Gadget:







Easy enough... but it provides no control over what that script then goes on to do as far as cookies are concerned.

What my solution does is use the same process, but instead create a placeholder for where these scripts will go. This allows us to defer loading those scripts until we have checked whether the visitor needs to give cookie consent, and if so, that they have.




So how does this work?  Well, most people with a little background in HTML/CSS/Javascript will realise that you can create a <div> tag, give it a unique ID and modify it later. It is the basis of pretty much all dynamic page content. See https://www.w3schools.com/js/js_htmldom_elements.asp for examples.
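As a tiny illustration of that idea, here is a sketch (not part of the gadget code itself; the `doc` parameter stands in for the browser's `document` object so the logic can be shown in isolation, and the id is hypothetical):

```javascript
// Hypothetical helper: find a placeholder div by id and change its contents.
// In a real page you would call setPlaceholderText(document, 'myPlaceHolder1', ...).
function setPlaceholderText(doc, id, html) {
  var el = doc.getElementById(id); // look up the placeholder div
  if (el) el.innerHTML = html;     // swap in the new content if it exists
  return el;                       // null if no element with that id
}
```

That is the pattern behind pretty much all dynamic page content - though, as explained below, it falls down when the content you want to insert is a script.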

The problem is that while you can retrospectively change the style, the text content and most HTML tags within an element, any scripts you try to add, either via innerHTML or by appending child nodes, will not be executed - because the page load has finished, and also as a security measure to prevent possible cross-site scripting exploits.... Bugger!

To solve this we need some clever asynchronous reading and writing of the script to the page, which allows the scripts to still be parsed after the page has finished loading.

I spent quite a bit of time looking at ways of doing this, and in fact for several of my other sites found a solution that works well with most third party includes... but some third party scripts still proved problematic - ones using document.write.

Having gone round in circles for some time, I re-discovered Postscribe https://krux.github.io/postscribe/ in my notes on useful libraries, which actually solves this problem very simply. Yay!

A few quick tests confirmed that it was possible to use postscribe and a little jQuery to retrospectively add executable scripts long after the page load has finished....

And that forms the basis of the system I now use.

So, let's first describe how to use my system to add third party scripts that are deferred until cookie consent is given, and then I will break down the code to explain how it works.



The first thing you need to do is pop over to https://ipdata.co/ and click on the "Get a free API key" button - because we will be using their API to work out if the visitor is from a GDPR country. You will be emailed a key which you need to include in the code that follows. A free API key gets you 1500 free location checks a day. (You can sign up to a paid plan if your site has more traffic, or you could use cookies to save the visitor's country code once cookies are agreed - to limit the number of API calls needed - but that is beyond the scope of this blog post.)



Next, we are going to be using both the jQuery and Postscribe libraries, so you need to add them to the <head> of your site. To do this, open up the theme and click on Edit HTML



... And then paste :

<script src='https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js'/>
<script src='https://cdnjs.cloudflare.com/ajax/libs/postscribe/2.0.8/postscribe.min.js'/>

immediately after the opening <head> tag.


Then click on Save Theme to finish.  This means that when your page loads, those two javascript libraries are going to be available for our code which will be further down the page. 

So far so good.


Now, rather than adding the third party code directly to a Blogspot gadget as you would normally, we add a placeholder div, giving it a unique id, eg: myPlaceHolder1 (you can add as many as you like, but they all need unique ids):

<div id="myPlaceHolder1"></div>




Next, create an HTML/Javascript gadget as close to the bottom of the page as you can :




and paste in the following code:



<script>
	

function showDivs(){

   // this is the bit you need to edit =============================================================================================

   // (see blog post for details)

   // add amazon search widget
   $(function() {
     postscribe('#myPlaceHolder1','<script type="text/javascript">amzn_assoc_ad_type ="responsive_search_widget";.... etc .... <\/script>');
   });

   // add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

    //etc...

}

// array of country codes where GDPR cookie agreement is needed (source : https://www.hipaajournal.com/what-countries-are-affected-by-the-gdpr/)
// if other countries needing a cookie accept add the country code to this array 
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];





//functions and globals 

// set up an interval timer with a five second period - this will repeatedly call the function pollCookieAgree, which checks for the
// agreement cookie that shows the visitor has agreed to the GDPR cookie terms. The timer is cancelled either when that cookie
// is found, or when we discover this is not a visitor from a GDPR country
var ourInterval = setInterval("pollCookieAgree()", 5000);


//general purpose cookie reading function
function readCookie(name) {  
            var nameEQ = name + "=";
            var ca = document.cookie.split(';');
            for (var i = 0; i < ca.length; i++) {
                var c = ca[i];
                while (c.charAt(0) == ' ') c = c.substring(1, c.length);
                if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
            }
            return null;
  }



//  interval based listener function - called until cancelled by ourInterval further up the script
function pollCookieAgree(){
	
	// call the readCookie script and grab its returned value
	var myCookie = readCookie("displayCookieNotice");


	if (myCookie==="y"){
	
		// yes - we have found the cookie that shows cookies have been agreed to
		console.log("found the cookie acceptance cookie");
		

		//OK, we have found the cookie that shows that the cookie banner has been accepted


		//we dont need the timer any more
		clearInterval(ourInterval);
    
		// safe to show the divs 
		console.log("Calling the jQuery and postscribe async read and insertion of the ads");
   
   
		// we can now use a combination of jQuery and Postscribe to asynchronously modify each
		// of the placeholders we set, using the function showDivs() which we edited at the top of this script
		showDivs();

	}

}



// This is where the main activity takes place!

// lets find out where our visitor is from. We put in a jQuery call to the geolocation API @ ipdata.co to get the users location data
// This returns a JSON formatted block of data based on the approximate location (by IP address) of the visitor. Remember to add your API key!


// No, thats not a typo-----------------------------------------v - the closing ) is down at the bottom of the script
$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',
 
 
	// we set up a callback for the response
	function(geodata) {
        
        
        // dump the result out to the console log for debug  - always handy!
        // there is quite a bit of useful info there - but remember we are trying to meet GDPR!!
        // You really should do nothing else with this data unless your visitor has specifically agreed to it.
        console.log(geodata);

		// now see if the recovered country code (held in the geodata.country_code object) occurs in the array of GDPR codes
		// which we declared right at the top of the script
		if (reqGDPR.includes(geodata.country_code)){
	
	        // Yes :: OK this is a GDPR situ!   
	        
	        // make a note to the console log that we have realised the visitor is from a GDPR country
            console.log("GDPR visitor from : " + geodata.country_code);
            	
			
			// need do nothing else, as the timer based cookie check function will run every five seconds to check
			// if the cookie policy has been accepted, and will not call the ads until that is confirmed
			  
	
  
		}else{
	
			// Nope, looks like we are good... this visitor is not from a place where GDPR applies : go get those ads!
	
			//we dont need the interval timer any more
			clearInterval(ourInterval);
	
			// we can now use a combination of jQuery and Postscribe to asynchronously modify each
			// of the placeholders we set, using the function showDivs() which we edited at the top of this script
			showDivs();
	
		}
	
	
	}
	
	
);	
</script>







First, edit the section about half way down :

$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',

Replacing the highlighted section with your own API key.


Then take the third party script provided by Twitter or whoever, and edit the code you have just pasted, replacing the highlighted section between the second set of single quotes with the third party script you want to include.

// add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

Repeat for however many third party scripts you have.

IMPORTANT: you will need to escape the closing script tag in the third party code if there is one. eg: if the code you pasted from the third party site contains </script> you need to add a backslash so you end up with <\/script>

Similarly, if the script you paste in has any single quotes in it, you will need to put a backslash before them.
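If you have several scripts to convert, those two escaping rules can be captured in a small one-off helper - purely illustrative, not part of the gadget code (you could run it once, eg: in the browser console, and paste the result in):

```javascript
// Hypothetical one-off helper: escape a third party snippet so it can sit
// inside single quotes in a postscribe(...) call.
function escapeForInlineScript(src) {
  return src
    .replace(/<\/script>/g, '<\\/script>') // stop </script> ending our wrapper script early
    .replace(/'/g, "\\'");                 // escape any single quotes inside the snippet
}
```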

That's it... that is all you need to do. Each time you add a third party script or widget to your site, create a placeholder div, copy the block of code above, edit the placeholder id and add the third party script as described.

Your site will now check if the visitor comes from a GDPR region and, if so, wait for them to agree to cookies before loading the third party scripts. If not, the scripts will load a few seconds after the page has loaded.
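In other words, the decision the page makes boils down to something like this - a condensed, hypothetical restatement of the logic, not code taken from the gadget itself:

```javascript
// Country codes where GDPR cookie consent is needed (same list as in the gadget code)
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT",
               "LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

// Defer the third party scripts only for GDPR-country visitors who have not yet
// accepted the cookie banner (a consent cookie value of "y" means accepted).
function shouldDeferScripts(countryCode, consentCookieValue) {
  return reqGDPR.indexOf(countryCode) !== -1 && consentCookieValue !== "y";
}
```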


And now, the description of how it works...

Let's look at this segment first:




// set up an interval timer with a five second period - this will repeatedly call the function pollCookieAgree, which checks for the
// agreement cookie that shows the visitor has agreed to the GDPR cookie terms. The timer is cancelled either when that cookie
// is found, or when we discover this is not a visitor from a GDPR country
var ourInterval = setInterval("pollCookieAgree()", 5000);

.
.
.
.
.


//  interval based listener function - called until cancelled by ourInterval further up the script
function pollCookieAgree(){
	
	// call the readCookie script and grab its returned value
	var myCookie = readCookie("displayCookieNotice");


	if (myCookie==="y"){
	
		// yes - we have found the cookie that shows cookies have been agreed to
		console.log("found the cookie acceptance cookie");
		

		//OK, we have found the cookie that shows that the cookie banner has been accepted


		//we dont need the timer any more
		clearInterval(ourInterval);
    
		// safe to show the divs 
		console.log("Calling the jQuery and postscribe async read and insertion of the ads");
   
   
		// we can now use a combination of jQuery and Postscribe to asynchronously modify each
		// of the placeholders we set, using the function showDivs() which we edited at the top of this script
		showDivs();

	}

}





Basically we are creating a "listener" script. The initialization -

var ourInterval = setInterval("pollCookieAgree()", 5000);

Sets up a recurring timer to call the pollCookieAgree function once every five seconds. (This will execute forever until it is cancelled).

This in turn uses the generic cookie reading function readCookie(name) to check for the presence of a cookie called "displayCookieNotice". (This will have been written out by Blogspot once a visitor has agreed to the cookie banner terms.)

If it is not found, the function does nothing and waits until it is next called by the interval timer, or is cancelled externally.
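The cookie lookup itself is just string parsing of document.cookie. To show that logic in isolation, here is the same parsing written as a standalone function that takes the cookie string as a parameter (an illustrative rewrite, not the exact code from the gadget):

```javascript
// Parse a "name=value; name2=value2" cookie string and return the value for
// the requested name, or null if it is not present.
function readCookieFrom(cookieString, name) {
  var nameEQ = name + "=";
  var parts = cookieString.split(';');
  for (var i = 0; i < parts.length; i++) {
    var c = parts[i];
    while (c.charAt(0) === ' ') c = c.substring(1); // trim the leading space after each ';'
    if (c.indexOf(nameEQ) === 0) return c.substring(nameEQ.length);
  }
  return null;
}
```

In the gadget, readCookie("displayCookieNotice") does the same thing against document.cookie directly.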

If it does find the cookie, however, it is safe to continue loading the third party scripts. It first cancels the interval timer as it is no longer needed, then, after logging to the console, calls our function showDivs().


	

function showDivs(){

   // this is the bit you need to edit =============================================================================================

   // (see blog post for details)

   // add amazon search widget
   $(function() {
     postscribe('#myPlaceHolder1','<script type="text/javascript">amzn_assoc_ad_type ="responsive_search_widget";.... etc .... <\/script>');
   });

   // add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

    //etc...

}



This does all the hard work of re-inserting the deferred third party scripts using postscribe.

postscribe('#myPlaceHolder1','<script>.... etc .... <\/script>');

Calls the script source held between the second set of single quotes via Postscribe, buffers it and does an async child append to the div target named between the first pair of single quotes. Problems with things like scripts using document.write are taken care of. Simple as that :)

But...!!!

That is only half the story. If a visitor from a non-GDPR country arrived, the cookie checking script would just loop forever - Blogspot only writes that cookie when someone agrees to the GDPR banner.

We need a second line of attack...

This is where the ipdata API call comes in.




// array of country codes where GDPR cookie agreement is needed (source : https://www.hipaajournal.com/what-countries-are-affected-by-the-gdpr/)
// if other countries need a cookie accept, add their country codes to this array
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

.
.
.
.

// lets find out where our visitor is from. We put in a jQuery call to the geolocation API @ ipdata.co to get the user's location data
// This returns a JSON formatted block of data based on the approximate location (by ip address) of the visitor. Remember to add your API key!


// No, that's not a typo-----------------------------------------v - the closing ) is down at the bottom of the script
$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',
 
 
	// we set up a callback for the response
	function(geodata) {
        
        
        // dump the result out to the console log for debug  - always handy!
        // there is quite a bit of useful info there - but remember we are trying to meet GDPR!!
        // You really should do nothing else with this data unless your visitor has specifically agreed to it.
        console.log(geodata);

		// now see if the recovered country code (held in the geodata.country_code object) occurs in the array of GDPR codes 
		// which we declared right at the top of the script 
		if (reqGDPR.includes(geodata.country_code)){
	
	        // Yes :: OK this is a GDPR situ!   
	        
	        // make a note to the console log that we have realised the visitor is from a GDPR country
            console.log("GDPR visitor from : " + geodata.country_code);
            	
			
			// need do nothing else, as the timer-based cookie check function will run every five seconds to check
			// if the cookies policy has been accepted, and will not call the ads until that is confirmed 
			  
	
  
		}else{
	
			// Nope, looks like we are good... this visitor is not from a place where GDPR applies : go get those ads!
	
			// we don't need the interval timer any more
			clearInterval(ourInterval);
	
			// we can now use a combination of jQuery and postscribe to asynchronously modify 
			// each of the placeholders we set, using the function showDivs() which we edited at the top of this script
			showDivs();
	
		}
	
	
	}
	
	
);	







The first thing we do is build an array of the two-character country codes for every country that implements GDPR:

var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

Next we use jQuery to get a JSON formatted object back from the ipdata API and set up a callback:

$.getJSON('https://api.ipdata.co?api-key=01234567890123456789', function(geodata) {...});  // we set up a callback for the response

(See https://g7nbp.blogspot.com/2018/04/using-jquery-and-ipdata-api-to-serve.html for more info).

if (reqGDPR.includes(geodata.country_code)){....}

We compare the returned geodata.country_code to our array of GDPR countries, and if there is a match, we do nothing... because our interval-timed script will be along shortly to keep checking for the agreement cookie.

If we don't find a match then we know this is not a GDPR country anyway, so we cancel the timer and call showDivs() to begin writing out the deferred third party scripts to the placeholder divs...

Jobsagoodun!

So, we have catered both for GDPR visitors who may or may not have agreed to cookies, and for non-GDPR visitors.

I hope that makes sense, and that you can use a variant of the above to solve any third party script issues you may have.

Tuesday, June 26, 2018

Dev work ongoing...

OK, a brief update. EU visitors to my site should now be greeted first by the cookies banner, and only if they agree to it will Amazon and Twitter be loaded. Yay! The work still remaining is to build in the script I posted a few weeks back, which decides whether this is needed by the country code of the visitor. This is because the Blogger cookie I am using as the trigger for GDPR cookies acceptance is only written for EU visitors, so it cannot be relied upon alone. ie: non-EU visitors at present are not going to see the ads, resources and the Twitter feed. A full post with worked examples will follow.

Sunday, June 24, 2018

Updates and missing sidebar stuff...

Just a quick post as it's 3:20am and I've been working on this since 8:00pm last night!

Regular visitors will no doubt have noticed that my sidebar widgets have gone. The reason for this is quite simple. They write cookies without asking. As most people will realise, this is a no-no under GDPR.

Blogspot usefully provides a cookie that shows when a visitor agrees to the cookies usage banner and I've fixed up a fairly straightforward listener script that checks for the presence of this cookie. It works fine...   

The idea is that where the widgets were, I've put simple placeholder div tags; once my listener verifies that a visitor has agreed to the use of cookies, it updates those placeholder divs with the third party provided scripts and thus loads the Amazon and Twitter feeds.

The problem however is that it's not quite that easy....

Firstly, you can't just fudge it using document.getElementById and innerHTML to modify the div content, because browser security blocks this to prevent cross site scripting attacks. You have to do it properly using the DOM. 

This means using something along the lines of

var sc = document.createElement("script");
sc.setAttribute("src", "https://somesite.com/somescript.js");
sc.setAttribute("type", "text/javascript");
document.getElementById("myPlaceHolder").appendChild(sc);

This appends a script tag within the div itself. 

This pretty much works for most external scripts, and I've used this to good effect on several other sites....

But...

With both the Amazon and Twitter feeds, the script being called uses document.write to create content. This is bad... firstly because it's a blocking method: a synchronous document.write normally holds up the page load. But here we have waited until after the page has loaded, and the visitor has agreed to cookies, before my listener script kicks off the createElement insertion, so the page finished loading long ago... and once that has happened, document.write can no longer add to the current div... Doh!

What is needed is an asynchronous method of writing to the placeholder, but of course I have no control over the content of the third party script.

This makes life a lot more complicated!

So, at present I'm looking at using jQuery or an Ajax script to act as an asynchronous buffer for the third party script. This should solve the problem of insertion after page load is complete, but adds a lot of complexity.

I will do a full write up when this is finished, as I'm sure there will be a number of bloggers using Blogspot who are in the same position - having third party scripts that write cookies, but that they have no control over.

In the meantime, sidebar widgets will be off and the site may be a little odd while I run these tests.

As the BBC used to say: "Normal service will resume shortly, please do not adjust your sets" :)

More later...

Friday, June 22, 2018

Keeping older hardware alive and useful

If you are anything like me, you will have a few old PCs and laptops that are getting a bit long in the tooth, perhaps with only a 32-bit architecture, but you can't quite bring yourself to dump them yet. Even though a modern Windows install, or even most of the recent mainstream Linux distros, probably won't even boot, or would be painfully slow... 

... But all is not lost; there are still up-to-date, feature-packed Linux distributions that can breathe new life into your older systems.

Right now I'm creating this blog post using one of my several aging Sony Vaio laptops that are more than a decade old. Current spec is a 2GHz CPU (Pentium 4 Mobile) with 2GB of RAM. It has a fairly respectable 1600x1200 resolution display and supports a second monitor to further expand the desktop, so it is a useful machine for development work. Its only real limitation is being a P4.

Despite this handicap, this is how my desktop looks:


(you can click any of the images to see them full size)



A full featured GUI with transparency, shadowing and all the most up-to-date features and software packages found in a Linux distro.


How have I achieved this?  BunsenLabs Helium, a lightweight Debian-based Linux distro, released in both 32-bit and 64-bit architectures. It quite happily installs on machines with as little as half a gig of RAM and just a few gigs of disk space. Despite this, it is highly customisable, and very quick in use.

A few quick menu-based tweaks gets you the attractive grey theme, edits the on-screen system info (conky) and the top bar (tint2), and adds the background image. I downloaded a mono-grey icon set (as I wanted a largely mono feel to the desktop), which simply has to be placed in /usr/share/icons/, and other than that, it's pretty much stock. You can save all the config changes you make using the BLOB theme manager:


This makes experimenting with settings, and rolling back to an earlier look and feel when things go astray, really easy.


So does it work...?

Well yes, I'm editing this using the latest version of Chrome (which also has a dark theme added)


And despite having a nice desktop with animations, blending and shadowing running, along with half a dozen Chrome tabs open, a VNC session to my development server and some music playing, I'm still using only just over half a gig of RAM and very little CPU.

I'm not going to bother writing up a full howto as there is plenty of info on the BunsenLabs site and in the help forums. In fact, all the info you need is provided within the base install itself, as it has very comprehensive help menus. Being based on the latest stable Debian, there is a wealth of info available as well. You really can't go wrong with this!

Instead, just a few more screenshots:


Using Geany editor / IDE for HTML editing and other coding



Yes, that's the full version of GIMP with all the extras!



And that's a VNC remote desktop session running to my development server (which is also running Helium with only half a gig of RAM!)


As lightweight distros go, I really can't fault this. It is much lighter in footprint than Lubuntu, which was my previous lightweight distro of choice. Blog followers will realise I'm not one to be easily swayed in my opinions when it comes to distros, but I'm sold.

If you need a lightweight distro, they don't come much lighter than Helium!

Sunday, April 15, 2018

Using JQuery and the IPData API to serve content based on locale

Have you ever wanted to be able to greet visitors to your blog or project website with a personalised greeting in their own language, or to offer content or advertising links based on the country that a visitor to your site is coming from?

If so, this short tutorial post might well help you along the way with that.

You will need to be able to edit the actual HTML content of your site, or be able to paste in an HTML/Javascript block (which Blogger supports either as a widget, or as a code edit via the main interface).

You will first need to visit http://ipdata.co and click on the button to get a free API key, which will allow you to make 1500 queries per day.

The code below is pretty much all you need to get started, though you will obviously need to edit it, both for the country codes and for the actual content you want to serve.


<div id="UK" style="display:none"><h1>UK based content goes here</h1></div>
<div id="US" style="display:none"><h1>US based content goes here</h1></div>
<div id="FR" style="display:none"><h1>FR based content goes here</h1></div>
<div id="other" style="display:none"><h1>other language based content goes here</h1></div>



<!-- load jQuery, which makes this much easier! -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>



<script>

    // put in an API call to ipdata.co to get the user's location data
    $.getJSON('https://api.ipdata.co?api-key={your api key}', 

function(geodata) {
        // dump the result out to the console log for debug 
        console.log(geodata);


  // now we can make some choices (you can get a full list of country codes here : https://en.wikipedia.org/wiki/ISO_3166-1)
  switch(geodata.country_code) {
    case "GB":
        //country code was == to "GB"
        console.log("I say old chap!");
        // toggle the visibility of the GB-specific content div
        UK.style.display="block";
        break;
    case "US":
        console.log("Howdy partner!");
        // toggle the visibility of the US-specific content div
        US.style.display="block";
        break;
    case "FR":
        console.log("Bonjour!");
        // toggle the visibility of the FR-specific content div
        FR.style.display="block";
        break;

    // add more to suit if needed!
    
    default:
       // toggle the visibility of the catchall div 
       console.log("Hello!!");
       other.style.display="block";
  }

});
</script>



Although the above is pretty self-explanatory, I will break down exactly what is happening a little:

Firstly, create a div for each language block, with its ID set to match the expected 2-char country codes. The important part is to set the style as display:none, which means by default they are all hidden. When the script runs and pulls back a country code, the case-switch block will change the visibility of one of your divs from none to block and will thus make just that div visible.

How? Well, next we call in the jQuery library. If you are not familiar with it, jQuery makes things like asynchronous (background) calls to other sites and APIs so much easier. I did start out writing this test in pure javascript, but quickly decided that the extra overhead of using jQuery is worth it in terms of simplicity, and let's face it, Blogger loads so much other stuff that one more lib isn't going to break anything!

Once jQuery is loaded we can then use its $.getJSON call to grab the output from https://api.ipdata.co

This grabs the visitor's IP address (and other details) and returns it in a JSON formatted block.


If you visit https://api.ipdata.co in your browser you will see what the return is:

{
    "ip": "x.x.x.x",
    "city": "London",
    "region": "England",
    "region_code": "ENG",
    "country_name": "United Kingdom",
    "country_code": "GB",
    "continent_name": "Europe",
    "continent_code": "EU",
    "latitude": 51.5142,
    "longitude": -0.0931,
    "asn": "AS60339",
    "organisation": "BT Internet",
    "postal": "EC2V",
    "currency": "GBP",
    "currency_symbol": "\u00a3",
    "calling_code": "44",
    "flag": "https://ipdata.co/flags/gb.png",
    "emoji_flag": "\ud83c\uddec\ud83c\udde7",
    "time_zone": "Europe/London",
    "utc_offset": "+0100",
    "is_eu": true,
    "suspicious_factors": {
        "is_tor": false
    }
}

Essentially a list of parameters about your location, based on your public IP address, in JSON format. If you are not familiar with JSON, have a look at https://en.wikipedia.org/wiki/Json. Basically, JSON is a language-independent data format, much like XML, which is very commonly used in web systems to communicate data between systems using API calls. It has a machine- and human-readable structure, meaning that you can look at its output and understand it, and more importantly javascript can parse it and extract data.
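As a tiny illustration of that machine-readability: jQuery's $.getJSON hands your callback an already-parsed object, but you can see the same parsing step done explicitly with the built-in JSON.parse (sample trimmed from the block above):

```javascript
// a trimmed-down sample of the API return shown above
var sample = '{"country_code": "GB", "currency": "GBP", "is_eu": true}';

// string -> javascript object (jQuery's $.getJSON does this step for you)
var parsed = JSON.parse(sample);

console.log(parsed.country_code);  // "GB"
console.log(parsed.is_eu);         // true
```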

As commented, you get 1500 free API queries per day, so unless your site has a lot of traffic, this is pretty much fire and forget as far as the API goes. If you do have a lot of traffic, you might consider writing out a cookie with the visitor's country code when they first arrive, and checking for its presence before calling the API for the geodata, to save traffic.
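A rough sketch of that caching idea might look like the following. Everything here is illustrative: the cookie name visitorCountry, the 30 day lifetime, and the handleCountry callback are made-up names, not part of the tutorial script:

```javascript
// Build a document.cookie assignment string caching `code` for `days` days.
function buildCountryCookie(code, days) {
  var expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000);
  return 'visitorCountry=' + encodeURIComponent(code) +
         '; expires=' + expires.toUTCString() + '; path=/';
}

// Read the cached code back out of a cookie string
// (pass document.cookie in the browser); null if not cached yet.
function cachedCountry(cookieString) {
  var match = cookieString.match(/(?:^|;\s*)visitorCountry=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the browser you would then wrap the API call like this:
if (typeof document !== 'undefined') {
  var code = cachedCountry(document.cookie);
  if (code) {
    handleCountry(code);  // repeat visitor: skip the API call entirely
  } else {
    $.getJSON('https://api.ipdata.co?api-key={your api key}', function (geodata) {
      document.cookie = buildCountryCookie(geodata.country_code, 30);
      handleCountry(geodata.country_code);
    });
  }
}
```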


(By the way, $.getJSON('https://api.ipdata.co?api-key={your api key}', is not a typo - if you are looking for the closing bracket, it's down at the bottom of the script.)

You are probably wondering how we get the return from that?

Well, the function call on the next line, function(geodata){, collects the return from the API call and assigns the parsed JSON object to the variable geodata.



We dump that out to the console.log for debugging purposes. Your browser will usually offer you the ability to inspect the page and give you console log data as part of its output, but that is beyond the scope of this tutorial. If you need more info about this google is your friend :)

OK, so we assume at this point (and if you are an advanced user, you might want to trap some errors and make no assumptions!) that geodata now holds a parsed JSON object. This means we can use simple object notation to get at the info that we need - ie: the country code.

And to do this we assume that geodata.country_code will contain the two-character international country code conforming to ISO 3166-1, and thus we can make a simple case-switch decision tree using:

 switch(geodata.country_code)

and provide a case for any countries we want to serve specific content for using:


case "GB":
        //country code was == to "GB"
        console.log("I say old chap!");
        // toggle the visibility of the GB-specific content div
        UK.style.display="block";
        break;

for each country code. Where the case statement matches the country code, the console log simply records a suitable remark to identify the country.

The important part is UK.style.display="block";

This changes the display style of one of the divs declared earlier to block, meaning it suddenly becomes visible, whereas all its counterparts remain hidden.


If none of the case statements find a match, then the default scenario is launched by making the div with the "other" id visible. This allows you to have a catchall greeting/block/advert in place for visitors from parts of the world you just didn't figure would pay your site a visit.

And that's about it as far as the javascript goes.

That all sounds complicated, but really isn't.

You can experiment with any of the key and value pairs (eg: currency and GBP) in the same way, but beware of the actual lon/lat info reported - I'm certainly not in London right now, though my ISP's core switch is. If you need finer-grained geolocation info then you can look at the HTML5 geolocation system, but be aware that, unlike the above script which is silent, asking for higher resolution geolocation info via HTML5 will bring up a permission prompt in the browser.
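For completeness, a minimal sketch of that HTML5 route (browser only, and it will trigger the permission prompt mentioned above; describePosition is just an illustrative helper, not part of the Geolocation API):

```javascript
// pure helper: round coordinates to a friendly two-decimal display string
function describePosition(lat, lon) {
  return 'lat ' + lat.toFixed(2) + ', lon ' + lon.toFixed(2);
}

// navigator.geolocation only exists in browsers, and only works
// if the visitor grants permission when prompted
if (typeof navigator !== 'undefined' && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    function (pos) {
      console.log('Device reports: ' +
        describePosition(pos.coords.latitude, pos.coords.longitude));
    },
    function (err) {
      console.log('Permission denied or unavailable: ' + err.message);
    }
  );
}
```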

I hope that's a help getting started with your location-specific greetings. I will leave you with https://www.omniglot.com/language/phrases/hello.htm and the request "do no evil". This script is not meant to discriminate, but to make things more inclusive. I have travelled through many countries in the world now, and been welcomed in all. The sooner we realise we are all citizens of Earth, and that artificially created boundaries and differences are meaningless, the better for all.

Wednesday, September 18, 2013

A Nostradamus moment?

Just a couple of days back I wrote a blog post entitled MOOCs?, where I also drew reference to a post from a couple of years back in which I discussed my worries that the current ICT curriculum in UK schools is not a substitute for learning Computer Science: ICT is not CS! That now seems almost prophetic, as two directly related posts cropped up on the web yesterday.

The first web announcement was spotted on develop-online.net, a website dedicated to the development of software, games, and interactive media, which highlights the new government outlines for the new Computer Science course which is to replace ICT: http://www.develop-online.net/news/45378/UK-Govt-outlines-computer-science-curriculum This very much echoes my thoughts, and hopefully addresses the shortcomings of ICT as it stands at present. The new curriculum aims to deliver a structured introduction to computer science starting at KS1, adding new concepts as well as building upon principles through to KS4.


           "The government has outlined the new curriculum for computer science in schools.

A program has been drawn up for Key stages one through four, which will mean students will begin learning computer science from early primary school and then throughout their education.

The statutory guidance states the aim of the national curriculum for computing is to ensure students can understand and apply the fundamental principles and concepts of computer science, including abstraction, logic, algorithms and data representation.

Pupils will also be taught to analyse problems in computational terms with practical experience of writing computer programs to solve them, to evaluate and apply information technology analytically to solve problems, and also ensure pupils are responsible, competent, confident and creative users of such information and communication technology.

The publication stated that such a “high quality education equips pupils to use computational thinking and creativity to understand and change the world”.
The new GCSE subject in computing will replace ICT from September 2014."


 
Reading through the key stage subject contents listed, this certainly looks like an improvement on the existing ICT syllabuses and will be welcomed I feel by both pupils and some educators.... 

....But there lies one of the problems...

With a course of this type, the educator - the person delivering the material - will really make or break the learning experience that children have at all levels. The GCSE Computer Science syllabus really needs someone a little more "geeky" to deliver it well: someone who has a good understanding of the concepts to be delivered and an active interest in the area, as well as general teaching skills and an enthusiasm for the subject that can be passed on to the pupils. I have no doubt that some - possibly many - existing teachers of ICT will be more than capable of meeting the demands... But I am equally sure that many may struggle. Which sort of brings me round to my point... How can these complex subjects be brought home to pupils when the teachers may not be fully up to speed on the subject themselves? MOOCs may be the answer: high quality interactive online learning material produced by the examining entities themselves.

...And this isn't perhaps out of the question...


The second web announcement yesterday is that Cambridge University Press announced that their GCSE Computing MOOC goes live at the end of the month (September 30th). This is an OCR-accredited GCSE in Computing course that's available and supported online right now, based around learning using the Raspberry Pi platform: http://www.cambridgegcsecomputing.org/about-us 

             "It's a GCSE, but not as you know it... Computing rules the world, or at least a large part of it. Cambridge GCSE Computing Online will provide free and open access to OCR’s GCSE in Computing, supported by resources from the Raspberry Pi Foundation and Cambridge University Press. Together we’re busy creating a ground-breaking site to help you make sense of the technologies and opportunities this amazing vehicle offers in industry, education and every aspect of our daily lives. Through a mixture of videos, animations and interactive exercises, the content is being designed to challenge and inspire you. We know that studying Computing is about using creativity and problem-solving to unlock opportunities all around you, inside the classroom and far beyond it."



This will be freely available (free as in beer, to use the Linux analogy) to schools to help deliver this qualification and will provide a valuable resource. More crucially, it is also available to students already in education who are worried that they have been disadvantaged by the current ICT curriculum. But they would have to make their own arrangements to sit the exam - and it is important to stress that:  

             "No OCR GCSE Computing certificate will be available direct through the Cambridge GCSE Computing Online website but the content will help students prepare for the exam." 

So individuals interested in this qualification would need to see if a local school or college was able to host the final examination. I'm sure that when this takes off a number of local groups will emerge, and due to the Raspberry Pi involvement it will be a hot topic in local hackerspaces and groups, attracting interest from enthusiasts. Finding enough people in a geographic area to make it worthwhile putting on an exam, or perhaps travelling to where one is running, should therefore not be out of the question.


In my opinion this has been a looooong time coming, and will be too late for some, but at least now there is light at the end of the tunnel for those students who are frustrated in their efforts to learn "Computing" rather than having "ICT" thrust upon them.


Sunday, September 15, 2013

MOOCs?

Nearly two years back I wrote a blog post entitled ICT is not CS!, where I wrote about some of the reasons why I feel that the UK (and some other countries too) are falling behind with education in computer science. The article caused discussion in some circles, and has indirectly led to several projects that I have heard about, where others with similar worries have stepped in to run various projects with schools in their own areas. Thanks to everyone who gave feedback, and especially to those who went on to do something positive.

Thankfully the situation in the UK is improving now, despite resistance and scepticism from some. There are a number of initiatives that have featured in the national press which seem to be slowly filtering into place. There are also a growing number of technology education based events throughout the country now, though some of them take a little finding out about unless you follow the right news feeds on Twitter and Facebook. But they are out there!

Sadly though, the same is still not true throughout the world today. There are still so many places where science and technology education is scarce or non-existent... But thanks to the widespread availability of the internet, this may not always be the case. Even if there is no local source of science and technology education, there has for a year or so now been a wide range of online education systems developed by some of the world's leading universities, delivered and graded for free simply to benefit the world of education. I refer of course to MOOCs (Massive Open Online Courses).

Today I read this: http://mobile.nytimes.com/2013/09/15/magazine/the-boy-genius-of-ulan-bator.html?ref=magazine&_r=0& which is what has prompted this post... (credit to adafruit for the original tweet which drew it to my attention)

It describes the achievements of Battushig Myanganbayar from Mongolia, a country where "a third of the population is nomadic, living in round white felt tents called gers on the vast steppe". At the age of 15 he "became one of 340 students out of 150,000 to earn a perfect score in Circuits and Electronics, a sophomore-level class at M.I.T. and the first Massive Open Online Course, or MOOC — a college course filmed and broadcast free or nearly free to anyone with an Internet connection — offered by the university."   The article also goes on to describe how "Battushig’s success also showed that schools could use MOOCs to find exceptional students all over the globe. After the course, Kim and Zurgaanjin suggested that Battushig apply to M.I.T., and he has just started his freshman year — one of 88 international students in a freshman class of 1,116. Stuart Schmill, the dean of admissions, said Battushig’s perfect score proved that he could handle the work."

There can be little doubting therefore the value of these courses not only in developing countries, but the world over.

This has however raised some interesting questions about the future of education in some circles... Does this spell the beginning of the end of traditional education, as some have suggested? Is it a "fad"? Somehow I doubt that... We have had the "Open University" here in the UK for a number of years (albeit not for free), yet we still see record attendance figures at our colleges and universities. I feel I should add that had it not been for the OU TV programs on the BBC back in the 70's and 80's, I almost certainly would not have the interests in computing, science and technology I have now. I tend to view these MOOC courses in the same way - as a source of inspiration for the scientists and engineers of tomorrow.

A handful of UK colleges and universities are now offering a limited range of MOOCs, but I think it's time a lot more looked towards producing their own MOOCs, not only as a public service, but as a way of finding the brightest and best students who may otherwise not have found a way to their doors. And more importantly, aiming them not only at school leavers but at those in secondary education too... Especially if those courses had national accreditation comparable to the courses currently offered in the national curriculum.

It really is time everyone started making education cheaper, more accessible and more meaningful to the future aspirations of our children - MOOCs may well be the means to do that.











Links:

http://en.wikipedia.org/wiki/Massive_open_online_course

http://www.openculture.com/free_certificate_courses


http://www.bdpa-detroit.org/portal/index.php?Itemid=20&catid=29:education&id=57:moocs-top-10-sites-for-free-education-with-elite-universities&option=com_content&view=article

http://ocw.mit.edu/index.htm