Friday, November 16, 2018

Can you really do anything useful with a Chromebook?



As the title says, I've wondered for a while now whether it is really possible to do anything beyond a little document editing and web browsing using a Chromebook.

Today I bit the bullet and ordered an Acer R13 2-in-1 Chromebook to find out, and I plan to share my experiences with you when it arrives.

I must admit, I have done a little digging beforehand to get some idea of relative specs and capabilities, and also what to expect...

...And it's a bit of a minefield in some ways.

Basically, not all Chromebooks are created equal. There is a huge variance in CPU spec, RAM, disk size and screen resolution. Add to this that some Chromebooks now support both Android apps and Linux apps out of the box, some can be coaxed into Linux app support via the unstable release channel (more on that in a future article), and some will just never support Linux. (There is also the option to dual boot into Linux on most Chromebooks, though how easy this is and how well it works varies just as much.)

Why do I need Linux apps?

Well, putting it simply: the web-based Chrome OS apps, and those available to download from the Chrome store, are apparently great for what they are intended, but for any serious dev work or graphics/photo manipulation the consensus of opinion seems to be that they are too lightweight and web-reliant. The same is true of most Android apps too (though some notable exceptions are Snapseed and Adobe Lightroom CC, which are excellent Android-based photo manipulation apps that I use frequently on my Android tablet).

As someone who spends a lot of time writing code in HTML5, CSS, JavaScript, PHP, Python etc, and also does a lot of graphical work, I really need tools like Sublime Text and GIMP - so the ability to either run Linux apps natively or to dual boot is pretty much essential.

I also tend to use a Wacom tablet and stylus quite often, and would like to be able to work the same way directly on screen, so a 2-in-1 laptop/tablet form factor is also important to me.

So why take this gamble? And why is a Linux zealot like myself selling out to the (do no) evil Google empire?

Well, partly out of curiosity, partly because my daily life is so entwined in Google products that I doubt it will make much difference anyway, and partly out of being a cheapskate!

Today's purchase is, as I mentioned, an Acer R13 2-in-1 Chromebook (pictured above), which in just the last few hours has had a price drop to only £299. For a brand-new-in-the-box 13" laptop that doubles up as a tablet (with stylus support), with 4GB of RAM and a 64GB solid-state drive, full 1080p resolution and up to 12 hours of battery life... well, that's pretty tempting!  (The link takes you to the eBay site for Currys/PC World, who seem to have been first to offer this price reduction.)

NB: that's a clearance price, as there is a newer model due out any day now, though that's set to retail near the £700 price point for a spec that doesn't seem that much better. Snap up a bargain???

From the research I've done, this looks like the best option for a combined Chromebook/tablet with sensible specs that "should"!! (fingers crossed!!) support both Android and Linux apps, and can be made to dual boot into Linux as a third option. (I really hope the reports I have read are correct!)

When it arrives, I plan to publish a series of posts - or possibly even a dedicated blog/vlog - covering the trials, tribulations and hopefully triumphs along the journey to making it a useful dev machine for today's nomadic developers!

Watch this space for updates...





Thursday, October 25, 2018

More MOOCs!

What..? Another MOOCs post??

Yes, it seems that all I blog about these days is online courses, and this is perhaps to be expected, as I've been actively investigating the typical range and quality of courses available.

The reason for this is simple: research...

I'm leading a small team of engineers and coders in an exciting new web-based learning project, built around the popular Raspberry Pi as a platform for both software and hardware projects, as part of the RaspberryPi Mug online magazine. (Watch this space for updates!)

Anyway, as the post title suggests, I thought it worth a quick blog post about another of the recent MOOC collections I have been researching.

The BitDegree site tends not to be indexed by the bigger online MOOC spiders, but it has a great range of courses, and as most of them are free, it is worth a mention here.

The main categories are:

* web development
* game development
* data science
* programming languages
* crypto and blockchain
* marketing
* graphic design
* ecommerce
* business
* personal development
* machine learning
* information security

I personally have looked at the web dev, programming and info security courses, and found them to be really good.

Many of the courses are "gamified" (to use their terminology): you are set interactive learning challenges, with feedback and a score-based incentive system.

If you fancy upskilling in any of the above areas, BitDegree is a good place to start.

Wednesday, July 25, 2018

Learn HTML, CSS, JavaScript and more online for free!

As regular visitors to my blog will realise, I often post code snippets and examples for blog and web based projects that I have knocked together and would like to share. Many visitors comment on how useful they find them in their own projects.

But also I get quite a few queries from visitors who are perhaps new to HTML, CSS and JS, who can copy and paste code just fine, but lack the depth of skills needed when it comes to correctly integrating it into their own sites, or extending the functionality to suit their own needs.  I do my best to help out, but at the end of the day, there is no substitute for a basic understanding of web development.

Traditionally you learned to code for the web in one of three ways:

  • from a book,
  • from a course at school or college, or
  • by looking at the source code of other people's sites and trying to figure out how stuff worked.


All of these approaches have their own merits, and I'm going to say that you should probably do all three anyway. But for a while now there has been an alternative. I refer of course to MOOCs (Massive Open Online Courses), which, as regular readers will realise, are also something I'm quite passionate about and frequently contribute to or help develop.

There are literally hundreds out there now on web design using HTML5, CSS3 and JS, and many go on to cover more complex topics such as JSON, AJAX, jQuery and frameworks like AngularJS. Most are well written and provide current and useful learning systems. But many have a downside: you have to pay. Some offer the learning materials free, but often expect quite large sums for certification, or proof of completion of each module. And worse - few of those qualifications really carry much weight with potential employers. Now that's fine if you are just using them to gain skills for your own purposes and enjoyment, but after all that hard work it would be nice to have something tangible to show for your efforts.

Thankfully there is a free* alternative. (I say free with an asterisk because although you can complete all aspects of the courses and projects entirely free - and that includes the proof of competency -  those who can afford to are encouraged to make a donation - and you really should if you find it useful.)

I refer of course to https://freecodecamp.org



In my opinion this is probably the most comprehensive learning system out there for those wanting to gain web skills. Although it may not have the flashiest interface, the simple browser-based lessons and coding interface literally take you from zero knowledge through to some advanced-level concepts, at whatever pace suits your available time and desire to learn. There are literally thousands of hours of learning available. If I was starting out learning web development, or needed a refresher or skills update (more advanced learners can drop in at any point in the curriculum), this is the online learning tool I would choose.

You can sign up using Google or other social media credentials, or via email and a password. This takes just seconds to complete and you are ready to go - starting quite literally at "Hello World!" level. From what I have seen of the curriculum, the steps and order of the learning concepts are pretty much spot on, and very similar in fact to the learning model I used when teaching the basics of web development to a wide range of learners.

As an aside, even if your main interest is not web development, this is still a great introduction to coding concepts and general markup systems, which you WILL use at some level in pretty much any coding platform. A good many pundits are predicting that JavaScript will be one of the most sought-after coding skills of the next decade.

Back to the system itself: as mentioned, everything is in-browser, so the course is completely cross-platform. As long as you have a web connection, you are good to go. There are various options for customising the interface and controlling public and private profile settings, including the ability to link your progress and earned qualifications to various social media accounts. Also provided is an integrated forum system where users can interact and discuss their progress and learning - though unlike some MOOCs, where a degree of participation in group discussion is mandatory, here it is optional.

As commented, this might not be the glitziest course out there, but I rate it as probably one of the best. Why not sign up today and give it a go?

Friday, June 29, 2018

Blocking the loading of third-party scripts in Blogger until cookies are agreed, for GDPR cookie compliance.

More than a month after GDPR came into force, it is still a hot topic. Many people have discovered that their own simple hobby blogs fall within the scope of GDPR compliance, and have struggled to understand exactly what they need to do to remain on the right side of the law.

There are plenty of blog posts and guidance sites out there now offering advice on banners, privacy statements, and what it means to be a data controller, which will help the typical blogger navigate this minefield.

But there is one area which, even for tech-savvy bloggers, is still problematic - the rule that no cookies other than a basic non-data-based session cookie can be written without the express prior consent of a site visitor from a GDPR country. Blogspot, which hosts many of my sites, has been fairly good about its own writing of cookies (though the navbar in some templates still occasionally leaks cookies while the cookie banner is on show, for some reason??). What they don't cover - simply because it's really not their problem - is control of cookies written by any third-party gadgets that bloggers add to their sites, eg: adverts, Twitter feeds, social media tools and affiliate product linking.

I suggest at this point that all blog owners pay a visit to https://www.cookiemetrix.com/ and type in the URL of their blog to see what cookies are being written and whether general GDPR compliance is met. Checking your site via proxy sites is NOT a safe way of determining this, as numerous tests have shown that what is served by proxies can be very different from a direct connection, due to their own internal processes. Use a dedicated free GDPR test site like CookieMetrix.

If third party cookies are an issue - read on!

OK... first a bit of background. How do bloggers normally add a third-party script or widget to their Blogspot site? Well, they could roll up their sleeves and directly edit the HTML/XML code of their template, but the vast majority will add a Blogspot HTML/JavaScript gadget:







Easy enough... but it provides no control over what that script then goes on to do as far as cookie control is concerned.

What my solution does is use the same process, but instead create a placeholder where these scripts will go. This allows us to defer loading those scripts until we have checked whether the visitor needs to give cookie consent, and if so, whether they have.




So how does this work? Well, most people with a little background in HTML/CSS/JavaScript will realise that you can create a <div> tag, give it a unique ID and modify it later. It is the basis of pretty much all dynamic page content. See https://www.w3schools.com/js/js_htmldom_elements.asp for examples etc.

The problem is that while you can retrospectively change the style, the text content and most HTML tags within an element, any scripts you try to add via innerHTML will simply not be executed - partly because the page load has finished, and partly as a security measure to prevent possible cross-site scripting exploits - and appending script nodes brings problems of its own.... Bugger!

To solve this we need some clever asynchronous reading and writing of the script to the page, which allows scripts to still be parsed and executed after the page has finished loading.

I spent quite a bit of time looking at ways of doing this, and in fact for several of my other sites found a solution that works well with most third-party includes... but some third-party scripts still proved problematic - the ones using document.write.

Having gone round in circles for some time, I re-discovered in my notes on useful libs Postscribe (https://krux.github.io/postscribe/), which actually solves this problem very simply. Yay!

A few quick tests confirmed that it was possible to use postscribe and a little jQuery to retrospectively add executable scripts long after the page load has finished....

And that forms the basis of the system I now use.

So, let's first describe how to use my system to add third-party scripts that are deferred until cookie compliance agreement is met, and then I will break down the code to explain how it works.



The first thing you need to do is pop over to https://ipdata.co/ and click on the "Get a free API key" button - because we will be using their API to work out if the visitor is from a GDPR country. You will be emailed a key, which you need to include in the code that follows. A free API key gets you 1500 free location checks a day. (You can sign up for a paid plan if your site has more traffic, or you could look at using cookies to save the visitor's country code once cookies are agreed, to limit the number of API calls needed - but that is beyond the scope of this blog post.)



Next, we are going to be using both the jQuery and Postscribe libs, so you need to add them to the <head> of your site. To do this, open up the theme and click on Edit HTML



... And then paste :

<script src='https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js'></script>
<script src='https://cdnjs.cloudflare.com/ajax/libs/postscribe/2.0.8/postscribe.min.js'></script>

immediately after the opening <head> tag.


Then click on Save Theme to finish. This means that when your page loads, those two JavaScript libraries will be available to our code, which will be further down the page.

So far so good.


Now, rather than adding the third-party code directly to a Blogspot gadget as you would normally, we use a placeholder div, giving it a unique id, eg: myPlaceHolder1 (you can add as many as you like, but they all need a unique id):

<div id="myPlaceHolder1"></div>




Next, create an HTML/JavaScript gadget as close to the bottom of the page as you can:




and paste in the following code:



<script>
	

function showDivs(){

   // this is the bit you need to edit =============================================================================================

   // (see blog post for details)

   // add amazon search widget
   $(function() {
     postscribe('#myPlaceHolder1','<script type="text/javascript">amzn_assoc_ad_type ="responsive_search_widget";.... etc .... <\/script>');
   });

   // add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

    //etc...

}

// array of country codes where GDPR cookie agreement is needed (source : https://www.hipaajournal.com/what-countries-are-affected-by-the-gdpr/)
// if other countries require cookie acceptance, add their country codes to this array 
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];





//functions and globals 

// set up an interval timer for five-second intervals - this will repeatedly call the function pollCookieAgree, which checks for the
// agreement cookie being written (showing that the visitor has agreed to the GDPR cookie terms), until it is cancelled because this
// is not a visitor from a GDPR country
var ourInterval = setInterval(pollCookieAgree, 5000);


//general purpose cookie reading function
function readCookie(name) {  
            var nameEQ = name + "=";
            var ca = document.cookie.split(';');
            for (var i = 0; i < ca.length; i++) {
                var c = ca[i];
                while (c.charAt(0) == ' ') c = c.substring(1, c.length);
                if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
            }
            return null;
  }



//  interval based listener function - called until cancelled by ourInterval further up the script
function pollCookieAgree(){
	
	// call the readCookie script and grab its returned value
	var myCookie = readCookie("displayCookieNotice");


	if (myCookie==="y"){
	
		// yes - we have found the cookie that shows cookies have been agreed to
		console.log("found the cookie acceptance cookie");
		

		//OK, we have found the cookie that shows that the cookie banner has been accepted


		//we dont need the timer any more
		clearInterval(ourInterval);
    
		// safe to show the divs 
		console.log("Calling the jQuery and postscribe async read and insertion of the ads");
   
   
		// we can now use a combination of jQuery and postscribe to asynchronously modify 
		// each of the placeholders we set, using the function showDivs() which we edited at the top of this script
		showDivs();

	}

}



// This is where the main activity takes place!

// lets find out where our visitor is from. We put in a jQuery call to the geolocation API @ ipdata.co to get the user's location data.
// This returns a JSON-formatted block of data based on the approximate location (by IP address) of the visitor. Remember to add your API key!


// No, thats not a typo-----------------------------------------v - the closing ) is down at the bottom of the script
$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',
 
 
	// we set up a callback for the response
	function(geodata) {
        
        
        // dump the result out to the console log for debug  - always handy!
        // there is quite a bit of useful info there - but remember we are trying to meet GDPR!!
        // You really should do nothing else with this data unless your visitor has specifically agreed to it.
        console.log(geodata);

		// now see if the recovered country code (held in the geodata.country_code property) occurs in the array of GDPR codes 
		// which we declared right at the top of the script 
		if (reqGDPR.includes(geodata.country_code)){
	
	        // Yes :: OK this is a GDPR situ!   
	        
	        // make a note to the console log that we have realised the visitor is from a GDPR country
            console.log("GDPR visitor from : " + geodata.country_code);
            	
			
			// need do nothing else, as the timer-based cookie check function will run every five seconds to check
			// if the cookie policy has been accepted, and will not call the ads until that is confirmed 
			  
	
  
		}else{
	
			// Nope, looks like we are good... this visitor is not from a place where GDPR applies : go get those ads!
	
			//we dont need the interval timer any more
			clearInterval(ourInterval);
	
			// we can now use a combination of jQuery and postscribe to proxy to asynchronously modify 
			// each each of the placeholders we set using the function showDivs() which we edited at the top of this script
			showDivs();
	
		}
	
	
	}
	
	
);	
</script>







First, edit the section about halfway down:

$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',

Replacing the highlighted section with your own API key.


Then take the third-party script that was provided by Twitter or whoever, and edit the code that you have just pasted, replacing the highlighted section between the second set of single quotes with the third-party script that you want to include.

// add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

Repeat for however many third-party scripts you have.

IMPORTANT: you will need to escape the closing script tag in the third-party code if there is one. eg: if the code you pasted from the third-party site contains </script> you need to add a backslash so you end up with <\/script>

Similarly, if the script you paste in has any single quotes in it, you will need to put a backslash before them.
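If you would rather not do that escaping by hand, the two rules can be captured in a small helper. This is purely a hypothetical sketch (escapeForPostscribe is my own name, not part of Postscribe or the gadget code above):

```javascript
// Hypothetical helper: escapes a pasted third-party snippet so it can
// sit safely inside the single-quoted string passed to postscribe().
// Backslashes are escaped first so the later rules aren't double-escaped.
function escapeForPostscribe(snippet) {
  return snippet
    .replace(/\\/g, "\\\\")                  // existing backslashes
    .replace(/'/g, "\\'")                    // single quotes
    .replace(/<\/script>/g, "<\\/script>");  // closing script tags
}

// The closing tag is split here only so this sample could itself live
// safely inside a <script> block.
var raw = "<script>var msg = 'hi';</scr" + "ipt>";
console.log(escapeForPostscribe(raw));
// <script>var msg = \'hi\';<\/script>
```

You would then paste the helper's output between the second pair of single quotes in the postscribe() call.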

That's it... that is all you need to do. Each time you add a third-party script or widget to your site, create a placeholder div, copy the block of code above, and edit the placeholder id and third-party script as described.

Your site will now check if the visitor comes from a GDPR region and, if so, wait for them to agree to cookies before loading the third-party scripts. If not, they will load a few seconds after the page has loaded.


And now, the description of how it works...

Let's look at this segment first:




// set up an interval timer for five-second intervals - this will repeatedly call the function pollCookieAgree, which checks for the
// agreement cookie being written (showing that the visitor has agreed to the GDPR cookie terms), until it is cancelled because this
// is not a visitor from a GDPR country
var ourInterval = setInterval(pollCookieAgree, 5000);

.
.
.
.
.


//  interval based listener function - called until cancelled by ourInterval further up the script
function pollCookieAgree(){
	
	// call the readCookie script and grab its returned value
	var myCookie = readCookie("displayCookieNotice");


	if (myCookie==="y"){
	
		// yes - we have found the cookie that shows cookies have been agreed to
		console.log("found the cookie acceptance cookie");
		

		//OK, we have found the cookie that shows that the cookie banner has been accepted


		//we dont need the timer any more
		clearInterval(ourInterval);
    
		// safe to show the divs 
		console.log("Calling the jQuery and postscribe async read and insertion of the ads");
   
   
		// we can now use a combination of jQuery and postscribe to asynchronously modify 
		// each of the placeholders we set, using the function showDivs() which we edited at the top of this script
		showDivs();

	}

}





Basically we are creating a "listener" script. The initialization -

var ourInterval = setInterval(pollCookieAgree, 5000);

This sets up a recurring timer to call the pollCookieAgree function once every five seconds. (It will keep firing until it is cancelled.)

This in turn uses the generic cookie-reading function readCookie(name) to check for the presence of a cookie called "displayCookieNotice". (This will have been written out by Blogspot once a visitor has agreed to the cookie banner terms.)

If it is not found, the function does nothing and waits until it is next called by the interval timer, or is cancelled externally.

If it does find the cookie, however, it is safe to continue loading the third-party scripts. It first cancels the interval timer, as it is no longer needed, then, after a log out to the console, calls our function showDivs().
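For anyone wanting to experiment with the cookie check outside a page, here is the same parsing logic pulled out into a standalone sketch that takes the cookie string as a parameter (in the browser you would pass it document.cookie). readCookieFrom is a hypothetical name of my own; it is not part of the gadget code:

```javascript
// Standalone version of the readCookie() parsing logic, taking the
// cookie string as a parameter so it can run outside the browser.
function readCookieFrom(cookieString, name) {
  var nameEQ = name + "=";
  var ca = cookieString.split(';');
  for (var i = 0; i < ca.length; i++) {
    var c = ca[i];
    while (c.charAt(0) === ' ') c = c.substring(1); // trim leading spaces
    if (c.indexOf(nameEQ) === 0) return c.substring(nameEQ.length);
  }
  return null; // cookie not present
}

console.log(readCookieFrom("foo=1; displayCookieNotice=y", "displayCookieNotice")); // "y"
console.log(readCookieFrom("foo=1", "displayCookieNotice"));                        // null
```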


	

function showDivs(){

   // this is the bit you need to edit =============================================================================================

   // (see blog post for details)

   // add amazon search widget
   $(function() {
     postscribe('#myPlaceHolder1','<script type="text/javascript">amzn_assoc_ad_type ="responsive_search_widget";.... etc .... <\/script>');
   });

   // add twitter widget
   $(function() {
     postscribe('#myPlaceHolder2','<a class="twitter-timeline" data-width="300" data-height="600" data-theme="dark" .... etc .... <\/script>');
    });

    //etc...

}



This does all the hard work of re-inserting the deferred third-party scripts using Postscribe.

postscribe('#myPlaceHolder1','<script>.... etc .... <\/script>');

This takes the script source held between the second set of single quotes, buffers it via Postscribe, and does an async child append to the div targeted by the id between the first pair of single quotes. Problems with things like scripts using document.write are taken care of. Simple as that :)

But...!!!

That is only half the story. If a visitor from a non-GDPR country arrived, the cookie checking script would just loop forever - Blogspot only writes that cookie when someone agrees to the GDPR banner.

We need a second line of attack...

This is where the ipdata API call comes in.




// array of country codes where GDPR cookie agreement is needed (source : https://www.hipaajournal.com/what-countries-are-affected-by-the-gdpr/)
// if other countries require cookie acceptance, add their country codes to this array 
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

.
.
.
.

// lets find out where our visitor is from. We put in a jQuery call to the geolocation API @ ipdata.co to get the user's location data.
// This returns a JSON-formatted block of data based on the approximate location (by IP address) of the visitor. Remember to add your API key!


// No, thats not a typo-----------------------------------------v - the closing ) is down at the bottom of the script
$.getJSON('https://api.ipdata.co?api-key=01234567890123456789',
 
 
	// we set up a callback for the response
	function(geodata) {
        
        
        // dump the result out to the console log for debug  - always handy!
        // there is quite a bit of useful info there - but remember we are trying to meet GDPR!!
        // You really should do nothing else with this data unless your visitor has specifically agreed to it.
        console.log(geodata);

		// now see if the recovered country code (held in the geodata.country_code property) occurs in the array of GDPR codes 
		// which we declared right at the top of the script 
		if (reqGDPR.includes(geodata.country_code)){
	
	        // Yes :: OK this is a GDPR situ!   
	        
	        // make a note to the console log that we have realised the visitor is from a GDPR country
            console.log("GDPR visitor from : " + geodata.country_code);
            	
			
			// need do nothing else, as the timer-based cookie check function will run every five seconds to check
			// if the cookie policy has been accepted, and will not call the ads until that is confirmed 
			  
	
  
		}else{
	
			// Nope, looks like we are good... this visitor is not from a place where GDPR applies : go get those ads!
	
			//we dont need the interval timer any more
			clearInterval(ourInterval);
	
			// we can now use a combination of jQuery and postscribe to asynchronously modify 
			// each of the placeholders we set, using the function showDivs() which we edited at the top of this script
			showDivs();
	
		}
	
	
	}
	
	
);	







The first thing we do is build an array of the two-character country codes for every country that implements GDPR:

var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT","LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

Next we use jQuery to get a JSON-formatted object back from the ipdata API, and set up a callback for the response:

$.getJSON('https://api.ipdata.co?api-key=01234567890123456789', function(geodata) {...});

(See https://g7nbp.blogspot.com/2018/04/using-jquery-and-ipdata-api-to-serve.html for more info).

if (reqGDPR.includes(geodata.country_code)){....}

We compare the returned geodata.country_code to our array of GDPR countries, and if there is a match, we do nothing... because our interval-timed script will be along shortly to keep checking for the agreement cookie.

If we don't find a match, then this is not a GDPR country anyway, so we cancel the timer and call showDivs() to begin writing out the deferred third-party scripts to the placeholder divs...

Jobsagoodun!

So, we have catered for both GDPR visitors who may or may not have agreed to cookies, and to non GDPR visitors.
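Boiled right down, the combined behaviour is equivalent to this little decision function - a condensed sketch for illustration only (shouldLoadThirdParty is a hypothetical name; in the real page the country code comes from the ipdata callback and the consent flag from the displayCookieNotice cookie):

```javascript
// Condensed sketch of the decision flow described above.
var reqGDPR = ["AT","BE","BG","HR","CY","CZ","DK","EE","FI","FR","DE","GR","HU","IE","IT",
               "LV","LT","LU","MT","NL","PL","PT","RO","SK","SI","ES","SE","GB"];

function shouldLoadThirdParty(countryCode, hasAgreedCookies) {
  if (!reqGDPR.includes(countryCode)) return true; // non-GDPR visitor: load straight away
  return hasAgreedCookies;                         // GDPR visitor: only after the banner is accepted
}

console.log(shouldLoadThirdParty("US", false)); // true
console.log(shouldLoadThirdParty("DE", false)); // false
console.log(shouldLoadThirdParty("DE", true));  // true
```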

I hope that makes sense, and that you can use a variant of the above to solve any third party script issues you may have.

Tuesday, June 26, 2018

Dev work ongoing...

OK, a brief update. EU visitors to my site should now first be greeted by the cookie banner, and only if they agree to it will the Amazon and Twitter widgets be loaded. Yay! The work still remaining is to build in the script I posted a few weeks back, which decides whether this is needed based on the visitor's country code. This is because the Blogger cookie I am using as the trigger for GDPR cookie acceptance is only written for EU visitors, so it cannot be relied upon alone. ie: non-EU visitors at present are not going to see the ads, resources and Twitter feed. A full post with worked examples will follow.

Sunday, June 24, 2018

Updates and missing sidebar stuff...

Just a quick post as it's 3:20am and I've been working on this since 8:00pm last night!

Regular visitors will no doubt have noticed that my sidebar widgets have gone. The reason for this is quite simple. They write cookies without asking. As most people will realise, this is a no-no under GDPR.

Blogspot usefully provides a cookie that shows when a visitor agrees to the cookies usage banner and I've fixed up a fairly straightforward listener script that checks for the presence of this cookie. It works fine...   

The idea is that where the widgets were, I've put simple placeholder div tags, and once my listener verifies that a visitor has agreed to the use of cookies, it updates the placeholder div tags with the third-party provided scripts and thus loads the Amazon and Twitter feeds.

The problem however is that it's not quite that easy....

Firstly, you can't just fudge it using document.getElementById and innerHTML to modify the div content, because browsers refuse to execute scripts inserted that way - in part to prevent cross-site scripting attacks. You have to do it properly using the DOM.

This means using something along the lines of

var sc = document.createElement("script");
sc.setAttribute("src", "https://somesite.com/somescript.js");
sc.setAttribute("type", "text/javascript");
document.getElementById("myPlaceHolder").appendChild(sc);

This appends a script tag within the div itself.

This pretty much works for most external scripts, and I've used this to good effect on several other sites....

But...

With both the Amazon and Twitter feeds, the script being called uses document.write to create content. This is bad... Bad because firstly it's a blocking method: as a synchronous insertion it normally holds up the page load. But in our case we have waited until after the page has loaded - the visitor has agreed to cookies, activating my listener script and kicking off the createElement insertion - so the page finished loading long ago... And document.write can no longer add to the current div... Doh!

What is needed is an asynchronous method of writing to the placeholder, but of course I have no control over the content of the third party script.

This makes life a lot more complicated!

So, at present I'm looking at using jQuery or an Ajax-based script to act as an asynchronous buffer for the third-party script. This should solve the problem of insertion after the page load is complete, but it adds a lot of complexity.
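As a rough sketch of the direction I'm exploring (all the names here are mine and purely illustrative, not a finished solution): buffer everything the third-party script tries to document.write, then flush the captured markup into the placeholder once the script has finished.

```javascript
// Collects markup passed to write() so it can be flushed later in one go.
function makeWriteBuffer() {
  var chunks = [];
  return {
    write: function (markup) { chunks.push(markup); },
    flush: function () { return chunks.join(""); }
  };
}

// Browser usage sketch (loadScriptInto is a hypothetical helper along the
// lines of the createElement snippet above; note this won't catch
// document.write calls made from nested scripts the feed loads itself):
// var buf = makeWriteBuffer();
// var realWrite = document.write;
// document.write = buf.write;
// loadScriptInto("myPlaceHolder", "https://somesite.com/somescript.js", function () {
//   document.write = realWrite;
//   document.getElementById("myPlaceHolder").innerHTML += buf.flush();
// });
```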

I will do a full write up when this is finished, as I'm sure there will be a number of bloggers using Blogspot who are in the same position - having third party scripts that write cookies, but that they have no control over.

In the meantime, sidebar widgets will be off and the site may be a little odd while I run these tests.

As the BBC used to say: "Normal service will resume shortly, please do not adjust your sets" :)

More later...

Friday, June 22, 2018

Keeping older hardware alive and useful

If you are anything like me, you will have a few old PCs and laptops that are getting a bit long in the tooth, perhaps with only a 32-bit architecture, but that you can't quite bring yourself to dump yet - even though a modern Windows install, or even most of the recent mainstream Linux distros, probably won't boot on them, or would be painfully slow...

... But all is not lost. There are still up-to-date, feature-packed Linux distributions that can breathe new life into your older systems.

Right now I'm creating this blog post using one of my several aging Sony Vaio laptops that are more than a decade old. The current spec is a 2GHz CPU (Pentium 4 Mobile) with 2GB of RAM. It has a fairly respectable 1600x1200 resolution display and supports a second monitor to further expand the desktop - so it is a useful machine for development work. Its only real limitation is being a P4.

Despite this handicap, this is how my desktop looks:


(you can click any of the images to see them full size)



A full-featured GUI with transparency, shadowing and all the most up-to-date features and software packages you'd expect to find in a Linux distro.


How have I achieved this? BunsenLabs Helium, a lightweight Debian-based Linux distro released for both 32-bit and 64-bit architectures. It quite happily installs on machines with as little as half a gig of RAM and just a few gigs of disk space. Despite this, it is highly customisable, and very quick in use.

A few quick menu-based tweaks gets you the attractive grey theme, edits the on-screen system info (conky) and the top bar (tint2) and adds the background image. I downloaded a mono-grey icon set (as I wanted a largely mono feel to the desktop), which simply has to be placed in /usr/share/icons/, and other than that, it's pretty much stock. You can save all the config changes you make using the BLOB theme manager:


Which makes experimenting with settings, and rolling back to an earlier look and feel when things go astray, really easy.


So does it work...?

Well yes, I'm editing this using the latest version of Chrome (which also has a dark theme added).


And despite having a nice desktop with animations, blending and shadowing running, along with half a dozen Chrome tabs open, a VNC session to my development server and some music playing, I'm still using only just over half a gig of RAM and very little CPU.

I'm not going to bother writing up a full howto as there is plenty of info on the BunsenLabs site and in the help forums. In fact, all the info you need is provided within the base install itself, as it has very comprehensive help menus. Being based on the latest stable Debian, there is a wealth of info available as well. You really can't go wrong with this!

Instead, just a few more screenshots:


Using Geany editor / IDE for HTML editing and other coding



Yes, that's the full version of GIMP with all the extras!



And that's a VNC remote desktop session running to my development server (which is also running Helium with only half a gig of RAM!)


As lightweight distros go, I really can't fault this. It is much lighter in footprint than Lubuntu which was my previous lightweight distro of choice. Blog followers will realise I'm not one to be easily swayed in my opinions when it comes to distros, but I'm sold.

If you need a lightweight distro, they don't come much lighter than Helium!

Sunday, April 15, 2018

Using jQuery and the ipdata API to serve content based on locale

Have you ever wanted to be able to greet visitors to your blog or project website with a personalised greeting in their own language, or to offer content or advertising links based on the country that a visitor to your site is coming from?

If so, this short tutorial post might well help you along the way with that.

You will need to be able to edit the actual HTML content of your site, or be able to paste in an HTML/JavaScript block (which Blogger supports either as a widget, or as a code edit via the main interface).

You will first need to visit http://ipdata.co and click on the button to get a free API key, which will allow you to make 1500 queries per day.

The code below is pretty much all you need to get started, though you will obviously need to edit it, both as far as the country codes and the actual content you want to serve go.


<div id="UK" style="display:none"><h1>UK based content goes here</h1></div>
<div id="US" style="display:none"><h1>US based content goes here</h1></div>
<div id="FR" style="display:none"><h1>FR based content goes here</h1></div>
<div id="other" style="display:none"><h1>other language based content goes here</h1></div>



<!-- load jQuery, which makes this much easier! -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>



<script>

    // make an API call to ipdata.co to get the visitor's location data
    $.getJSON('https://api.ipdata.co?api-key={your api key}',

    function(geodata) {
        // dump the result out to the console log for debug
        console.log(geodata);

        // now we can make some choices (you can get a full list of country
        // codes here: https://en.wikipedia.org/wiki/ISO_3166-1)
        switch(geodata.country_code) {
            case "GB":
                // country code was "GB"
                console.log("I say old chap!");
                // toggle the visibility of the GB-specific content div
                UK.style.display="block";
                break;
            case "US":
                console.log("Howdy partner!");
                // toggle the visibility of the US-specific content div
                US.style.display="block";
                break;
            case "FR":
                console.log("Bonjour!");
                // toggle the visibility of the FR-specific content div
                FR.style.display="block";
                break;

            // add more to suit if needed!

            default:
                // toggle the visibility of the catch-all div
                console.log("Hello!!");
                other.style.display="block";
        }

    });
</script>



Although the above is pretty self-explanatory, I will break down exactly what is happening a little:

Firstly, create a div for each language block. The important part is to set the style as display:none, which means that by default they are all hidden. When the script runs and pulls back a country code, the switch block will change the visibility of the corresponding div (note that the "GB" case toggles the div with the id "UK") from none to block, and will thus make just that div visible.

How? Well, next we load the jQuery library. If you are not familiar with it, it makes things like asynchronous (background) calls to other sites and APIs so much easier. I did start out writing this test in pure JavaScript, but quickly decided that the extra overhead of using jQuery is worth it in terms of simplicity - and let's face it, Blogger loads so much other stuff that one more lib isn't going to break anything!

Once jQuery is loaded, we can then use its $.getJSON call to grab the output from https://api.ipdata.co

This looks up the visitor's IP address (and other details) and returns the results in a JSON-formatted block.


If you visit https://api.ipdata.co in your browser you will see what the return looks like:
{
    "ip": "x.x.x.x",
    "city": "London",
    "region": "England",
    "region_code": "ENG",
    "country_name": "United Kingdom",
    "country_code": "GB",
    "continent_name": "Europe",
    "continent_code": "EU",
    "latitude": 51.5142,
    "longitude": -0.0931,
    "asn": "AS60339",
    "organisation": "BT Internet",
    "postal": "EC2V",
    "currency": "GBP",
    "currency_symbol": "\u00a3",
    "calling_code": "44",
    "flag": "https://ipdata.co/flags/gb.png",
    "emoji_flag": "\ud83c\uddec\ud83c\udde7",
    "time_zone": "Europe/London",
    "utc_offset": "+0100",
    "is_eu": true,
    "suspicious_factors": {
        "is_tor": false
    }
}

Essentially a list of parameters about your location, based on your public IP address, in JSON format. If you are not familiar with JSON, have a look at https://en.wikipedia.org/wiki/Json - basically, JSON is a language-independent data format, much like XML, which is very commonly used in web systems to communicate data between systems via API calls. It has a machine- and human-readable structure, meaning that you can look at the output and understand it, and more importantly, JavaScript can parse it and extract data.
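For instance, parsing a JSON string in JavaScript and reading a field with dot notation is a one-liner (the cut-down response string here is illustrative):

```javascript
// A cut-down version of the API response, as a raw JSON string
var raw = '{"country_code": "GB", "country_name": "United Kingdom", "is_eu": true}';

// JSON.parse turns the string into a plain JavaScript object...
var geodata = JSON.parse(raw);

// ...whose fields can then be read with dot notation
console.log(geodata.country_code); // "GB"
console.log(geodata.is_eu);        // true
```

In practice you won't even need to do this yourself - jQuery's $.getJSON parses the response and hands your callback a ready-made object.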

As mentioned, you get 1500 free API queries per day, so unless your site has a lot of traffic, this is pretty much fire and forget as far as the API goes. If you do get a lot of traffic, you might consider writing out a cookie with the visitor's country code when they first arrive, and checking for its presence before calling the API for the geodata, to save traffic.
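A sketch of that caching idea (the cookie name "cc" and the one-day lifetime are my own choices, not anything ipdata requires):

```javascript
// Reads a previously cached two-letter country code out of a cookie
// header string (in the browser, pass document.cookie); null if absent.
function getCachedCountry(cookieString) {
  var match = cookieString.match(/(?:^|;\s*)cc=([A-Z]{2})/);
  return match ? match[1] : null;
}

// Browser usage sketch (showContentFor is a hypothetical helper that
// toggles the right div, as in the main script):
// var cc = getCachedCountry(document.cookie);
// if (cc) {
//   showContentFor(cc); // no API call needed
// } else {
//   $.getJSON('https://api.ipdata.co?api-key={your api key}', function (geodata) {
//     document.cookie = "cc=" + geodata.country_code + "; max-age=86400; path=/";
//     showContentFor(geodata.country_code);
//   });
// }
```

Bear in mind that if you do this, the cached cookie itself falls under the same GDPR consent considerations discussed above.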


(By the way, $.getJSON('https://api.ipdata.co?api-key={your api key}', is not a typo - if you are looking for the closing bracket, it's down at the bottom of the script.)

You are probably wondering how we get the return from that?

Well, the function call on the next line, function(geodata) {, collects the return from the API call and assigns the parsed JSON object to the variable geodata.



We dump that out to console.log for debugging purposes. Your browser will usually offer the ability to inspect the page and view the console log as part of its output, but that is beyond the scope of this tutorial. If you need more info about this, Google is your friend :)

OK, so we assume at this point (advanced users might want to trap some errors and make no assumptions!) that geodata is now a parsed JSON object. This means we can use simple dot notation to get at the info that we need - i.e. the country code.

And to do this we assume that geodata.country_code will contain the two-char international country code conforming to ISO 3166-1, and thus we can make a simple switch decision tree using:

 switch(geodata.country_code)

and provide a case for any countries we want to serve specific content for using:


case "GB":
        // country code was "GB"
        console.log("I say old chap!");
        // toggle the visibility of the GB-specific content div
        UK.style.display="block";
        break;

for each country code. Where the case statement matches the country code, the console log simply records a suitable remark identifying the country.

The important part is UK.style.display="block";

This works because browsers expose elements with an id as global variables, so UK here refers to the div with id="UK" (using document.getElementById("UK") would be the more robust way to write it). Setting its display to block makes that div suddenly visible, while all its counterparts remain hidden.


If none of the case statements find a match, then the default scenario is launched by making the div with the "other" id visible. This allows you to have a catchall greeting/block/advert in place for visitors from parts of the world you just didn't figure would pay your site a visit.

And that's about it as far as the JavaScript goes.

That all sounds complicated, but really isn't

You can experiment with any of the key/value pairs (e.g. currency and GBP) in the same way, but beware of the actual lat/lon info reported - I'm certainly not in London right now, though my ISP's core switch is. If you need finer-grained geolocation info, you can look at the HTML5 geolocation system, but be aware that, unlike the above script which is silent, asking for higher-resolution location info via HTML5 will bring up a permission prompt in the browser.

I hope that's a help getting started with your location-specific greetings. I will leave you with https://www.omniglot.com/language/phrases/hello.htm and the request "do no evil". This script is not to discriminate, but to make things more inclusive. I have travelled through many countries in the world now, and been welcomed in all. The sooner we realise we are all citizens of Earth, and that artificially created boundaries and differences are meaningless, the better for all.