Tuesday, December 17, 2013

Weird Find in IE9 - phantom scripts

I had a problem in IE9 (!!shock!!)

Something weird would happen when loading a page using back, forward, or pressing Enter in the URL bar when the page was already loaded: my script wouldn't run.

It would load
It would be attached to the DOM
But it wouldn't run, and when I looked at the "script" in F12 tools, the script appeared to be blank.

I had "defer" on it, so I took that off. But no dice. I tried attaching to the head instead of the body (not that that should have made a difference). But no dice.

Then an idea struck me. The script which loads the "missing" script isn't attached to the DOM when it runs. This was the only thing "unusual" left that could be the source of the problem. References to "window" and "document" were valid, since the "missing" script did show up in the DOM. But what else could it be??

"Loading" Script:

<script src="loader.js">
/* 1 */ var me = document.querySelector("script[src='loader.js']"); me.parentNode.removeChild(me); // remove the loader's own <script> element
/* 2 */ var s = document.createElement("script");
/* 3 */ s.src = "missing.js";
/* 4 */ document.body.appendChild(s);
</script>


The script "missing.js" above won't run when it's loaded.
Remove line 1, and it runs.

Go figure~~

edit: To clarify, missing.js won't run if the loader.js element is removed from the DOM. The missing.js script element is on the DOM and is not removed.
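Since removing the loader's own element is the trigger, one workaround (a minimal sketch only - the onload approach is an assumption, not something tested in this post) is to delay detaching the loader element until missing.js has actually loaded:

<script src="loader.js">
/* contents of loader.js */
var me = document.querySelector("script[src='loader.js']");
var s = document.createElement("script");
s.src = "missing.js";
s.onload = function () {
  // detach the loader element only after missing.js has run
  if (me.parentNode) me.parentNode.removeChild(me);
};
document.body.appendChild(s);
</script>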

Friday, October 4, 2013

Best Bluetooth Headsets - comments

The headsets:

Motorola S10-HD   ($35 ebay)
Jaybird Bluebuds X   ($150 ebay)
Philips Tapster   ($85 ebay)


I'm not a review web site, so I'm not going to go into too many details. Bottom line: if you want a pair for running in the gym, the Motorolas are fine and cheap, and if you ruin them, "so what". If you want to run or play sports --outdoors--, you'll want the Bluebuds. If you want a cord-free headset and don't intend to jump up and down, then the Tapster is the best.

BEST

Audio:
Tapster - by far. I don't know what they did, but Philips squeezed 128kbps-quality audio into a bandwidth compatible with Bluetooth. The bass is hands down the best Bluetooth experience I've had. I don't even bother changing headsets when I get to work any more. The Bluebuds were advertised to have better sound, but I don't hear it.

Signal:
Bluebuds - by far. Finally, something they advertise which is accurate. They can hold a signal OUTDOORS when all the others fade away. INDOORS, the Tapster actually has the best reach, at 60ft and through two walls into the men's bathroom. At that distance the Bluebuds were cutting in and out, but the Tapster was strong. But outdoors, the only one to play non-stop with the phone in my pocket was the Bluebuds.

Controls - Ease of Use:
Tapster - by far. Once you learn HOW to control it, it's a pleasure to use. The Bluebuds' controls are convenient to reach, but the buttons themselves require long presses or really long presses - who's got time for that??

Fit - Stays in place and solid seal:
Tapster
note: The Bluebuds WITH their extra ear attachment are actually better at staying in your ear, but the cheap plastic mold is a far cry from the solid seal the Tapster makes. This could be by design, though, because with too much seal you hear your own footsteps and movements (as with the Tapster), which for a product made for active people would be a deal-breaker.

Design:
The Bluebuds are just so itty bitty, and I love the foldable (therefore form-fitting) wire they have.

Construction:
Tapster. The S10-HD seems cheap, and the Bluebuds' internals come unglued too easily (which means sending them in for repair).

Looks:
Love the cool factor of the Tapster touch panel, but what can I say: "smaller is better", so Bluebuds win here.




WORST

Signal:
S10-HD. Indoors and nearby it works just fine. Anywhere else, not so much.

Audio:
S10-HD. With the "HD" in the name, I expected more. They are MUCH better than your normal BT dongle, but still the worst of the three.

Flexibility:
Tapster. It can only pair with two devices at a time. Adding more than that eventually leads to an "unsteady state", and you might end up with a comatose headset which you have to let -completely- drain of battery before you can use it again.

Functionality:
Tapster. It takes a long time to figure out the gesture controls (the instructions are useless), and they're buggy. Several times I accidentally sent the device into an "unsteady state" where I had to let the battery drain completely (thereby losing its state information) before using it again.




Summary

Moto S10-HD - Cheap, decent.
Jaybird Bluebuds X - Perfect for sports, outdoors
Philips Tapster - Best audio and easiest to use, but has its own equivalent of the "blue screen of death".

My personal choice is the Tapster. I've learned its quirks, and now I get to relax and enjoy the highest quality BT audio available.



Wednesday, September 25, 2013

Firefox modules - Implementing a module, specifically, ContentPolicy

In Firefox, Modules are isolated global environments. Imagine the bastard child of a WebWorker and CommonJS/AMD import/require statements (see http://addyosmani.com/writing-modular-js/).
As stated, Modules are global (singletons) for the Firefox application, and are started (run) the first time they are imported into another (running) JS scope. Modules can be registered with Firefox when they start up.
In this example, we'll implement an nsIContentPolicy module, which can be used for intercepting (and blocking) ALL requests made from the browser.

To get started:

1) Create a chrome/modules folder. Create the module file, $(module_name).jsm, where module_name can be whatever you like.
2) Add to the chrome.manifest, where app_name is the same name used in the manifest already:  "resource    $(app_name)    chrome/modules/"
3) Add this code wherever you want the module to be accessible from, where scope_reference is the object you want the exported module objects to be exposed on: "Components.utils.import("resource://$(app_name)/$(module_name).jsm", $(scope_reference));"
4) Inside $(module_name).jsm, you'll need this basic skeleton code. Since the example module will implement nsIContentPolicy, I'll use the name "ContentPolicy":
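A minimal skeleton of that shape might look like the following (a sketch only - the UUID, contract ID, and description are placeholders you must replace with your own):

var EXPORTED_SYMBOLS = ["ContentPolicy"];

const Ci = Components.interfaces;
const Cr = Components.results;

var ContentPolicy = {
  // Customize these three values for your add-on.
  classDescription: "Example content policy",
  classID: Components.ID("{12345678-1234-1234-1234-123456789abc}"),
  contractID: "@example.com/example-content-policy;1",

  // QueryInterface must list every interface implemented, including nsIFactory.
  QueryInterface: function (iid) {
    if (iid.equals(Ci.nsISupports) ||
        iid.equals(Ci.nsIFactory) ||
        iid.equals(Ci.nsIContentPolicy))
      return this;
    throw Cr.NS_ERROR_NO_INTERFACE;
  },

  // nsIFactory - lets the component registrar hand out this singleton.
  createInstance: function (outer, iid) {
    if (outer) throw Cr.NS_ERROR_NO_AGGREGATION;
    return this.QueryInterface(iid);
  },
  lockFactory: function (lock) {}
};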



Note: You must customize the "classDescription", "classID", "contractID", and the QueryInterface list. The QueryInterface list must also include "Components.interfaces.nsIFactory".
5) Implement all the methods of every interface you declare in the QueryInterface list.
6) The EXPORTED_SYMBOLS string array lists the objects on the module's global scope that will be exposed on the $(scope_reference) passed into the import call.
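For reference, the consumer side of steps 3 and 6 then looks something like this (with "myapp" and "ContentPolicy" standing in for $(app_name) and $(module_name)):

var scope = {};
Components.utils.import("resource://myapp/ContentPolicy.jsm", scope);
// Everything named in EXPORTED_SYMBOLS is now available on the scope object:
// scope.ContentPolicy is the module's singleton ContentPolicy object.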


Registering the Module with Firefox - If you want FF to USE YOUR MODULE, then you have to REGISTER IT

For registering with Firefox, and for ContentPolicy in particular, we need to add the Module as a listener for certain events. SEE THIRD SNIPPET
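A rough sketch of what that registration typically looks like (continuing the placeholder skeleton above; adding the contract ID to the "content-policy" category is what makes Firefox call the policy for every request):

const Cc = Components.classes;
const Cm = Components.manager;

function registerContentPolicy() {
  // Register the factory under the class/contract IDs from the skeleton above.
  var registrar = Cm.QueryInterface(Ci.nsIComponentRegistrar);
  registrar.registerFactory(ContentPolicy.classID,
                            ContentPolicy.classDescription,
                            ContentPolicy.contractID,
                            ContentPolicy);

  // Hook the policy into the "content-policy" category.
  var catMan = Cc["@mozilla.org/categorymanager;1"]
                 .getService(Ci.nsICategoryManager);
  catMan.addCategoryEntry("content-policy", ContentPolicy.contractID,
                          ContentPolicy.contractID, false, true);
}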

Custom for nsIContentPolicy 

Must add these methods to implement nsIContentPolicy: SEE SECOND SNIPPET
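The standard shape of those methods (again a sketch, not the post's original snippet) - returning REJECT_REQUEST from shouldLoad is what blocks a request:

// Added to the ContentPolicy object from the skeleton above.
shouldLoad: function (contentType, contentLocation, requestOrigin,
                      context, mimeTypeGuess, extra) {
  // Inspect contentLocation.spec (the requested URL) and decide.
  // Return Ci.nsIContentPolicy.REJECT_REQUEST to block this load.
  return Ci.nsIContentPolicy.ACCEPT;
},

shouldProcess: function (contentType, contentLocation, requestOrigin,
                         context, mimeTypeGuess, extra) {
  return Ci.nsIContentPolicy.ACCEPT;
}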

Wednesday, September 11, 2013

Found some concerning code today

Going through some client code, I found this little snippet. It appears to be their ad tag. Now, this wouldn't get around browser restrictions on JS or anything, but it would certainly bypass any browser add-ons which might try to remove unwelcome JS. It also wouldn't be seen by crawlers a first party might use to monitor such things (depending on what the script does).

I'll remove the actual code, but the idea is just to set an error handler, and then cause an error.

<img src="data:image/png,gotcha" onerror="var cookie=document.cookie; sendCookieToThirdParty(cookie);">

Of course, I describe something malicious, but this could very much be used for legit purposes where the tag owner doesn't want their code blocked by AdBlock or equivalent.

Wednesday, June 12, 2013

Annoying differences in how browsers handle Flash movement

Everyone hates the differences between the different major browsers out there. Not to mention the different versions of those browsers. Supporting all of them is a pain in the ass. And here is just one more reason:

Action: Take a Flash Object element on a page, and then move it someplace else (append the element to a different parent node)
Code:
var newCont = document.createElement("div");
var swfOb = document.getElementById("swf-object-element"); // already exists on the page
console.log(swfOb.lastChild); // <param name="flashvars" value="name=value" />
swfOb.lastChild.value = "name=value2";
document.body.appendChild(newCont);
newCont.appendChild(swfOb); // move the SWF into the new container
swfOb = document.getElementById("swf-object-element");
var swfExternalFunction = swfOb.setCorner; // ExternalInterface callback exposed by the SWF
swfOb.setCorner("tr"); // (attempts to) move the corner to the top right (starts in the bottom right)
// In NO browser does that call actually WORK
// ...wait 100ms
swfExternalFunction == null;            // false IE, FF;  true Chrome
swfExternalFunction == swfOb.setCorner; // false!! all
swfOb.setCorner("tr");                  // Error! IE;  works in FF, Chrome
swfOb.lastChild; // <param name="flashvars" value="name=value" />  FF!! ; <param name="flashvars" value="name=value2" />  Chrome, IE (sometimes)

Consequences:
SWF is in the bottom right. The call to "setCorner" does nothing. All browsers. 

Because::
IE: Will not let you call a Flash External Interface function on a moved element. BUT!!!! the function will still exist on the element. Buggars. And I couldn't get a straight answer from IE when modifying the Object Element before/after the location change. IE seems to have some sort of race condition going on. Sometimes it does, sometimes it doesn't, and sometimes the buggar doesn't even allow ANY External Interface any more. More hate for IE. Tested in IE 10, _but_ in IE8 standards mode. I had similar problems in actual IE8 - I could not get consistent results. I need to create better tests for IE, but that's time I don't have.

FF: FF re-inits the SWF, just like Chrome. Any changes to the "old" element will NOT be reflected in the new one. What's WORSE is that the references to the element's External Interface functions STILL EXIST (on the Object) while the "new" object is being initialized, BUT!!! those functions STILL REFERENCE THE "OLD" OBJECT!! Talk about craziness...
But wait, I'm not done. Firefox completely IGNORES changes to the object element in the meantime. So you can't, say, change the FlashVars and have them be updated in the "new" Object. Booooooo FF

Chrome: The best of the three. I'd still prefer that the SWF not be re-initialized, but oh well. At LEAST Chrome doesn't lie to you and expose functions that are out of date or don't exist. As well, it recognizes changes to the Object Element and on the "new" Object, those changes are present.



RESULT:

The only surefire way to move a SWF element is to set a timeout for after the move and re-init it then, knowing that all your previous changes are gone. Which means you have to keep state externally, or get the current state from the Object before you move it. There's NO WAY to verify that it's "ok yet" to make changes after you've moved it, so you just have to set a long timeout and pray that it's long enough (if you don't want to write browser-specific code, that is).
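A sketch of that approach (the element ID, the 1000ms delay, and the onReady callback are all placeholders; the delay is a guess, since there is no reliable "ready" signal to wait for):

function moveSwf(swfId, newParent, onReady) {
  var swfOb = document.getElementById(swfId);

  // 1) Capture whatever state you need from the SWF *before* the move,
  //    e.g. by reading its ExternalInterface getters or the flashvars param.
  var flashVars = swfOb.lastChild.value; // <param name="flashvars" ...>

  // 2) Move the element; most browsers will re-initialize the SWF.
  newParent.appendChild(swfOb);

  // 3) Wait a (hopefully long enough) fixed delay, then hand back a fresh
  //    reference so the caller can re-apply state via ExternalInterface.
  setTimeout(function () {
    onReady(document.getElementById(swfId), flashVars);
  }, 1000);
}

// usage sketch
moveSwf("swf-object-element", document.getElementById("new-container"),
        function (swfOb, flashVars) {
          swfOb.setCorner("tr"); // re-apply previous state here
        });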

Thursday, May 23, 2013

How to stop cookies from being dropped on first party page

Originally written May 11, 2012


Goal - To block all non-first party cookies loading on a page.
Starting point - a <script> element loading my script into the first party page somewhere
Tools Used - Chrome Browser
What I know + test results:
  • DOM parsing is done synchronously and linearly - DOM nodes are created as the page is "parsed" (i.e. in order). For example, for a script in the header, the "body" element does not exist yet.
  • Scripts are executed when their DOM node is created and added to the Document.
  • Iframes are loaded when their elements have been added to the Document.
  • Images are loaded when the NODE IS CREATED, not when added to the Document.
  • Image nodes are created before the page is parsed - a level-one parse if you will, or just a pre-scan. Either way you view it, images are loaded immediately and independently of DOM parsing.
  • It's possible to replace nodes "under" the current node by setting the "innerHTML" of the parent node. The lower node (removed by your "innerHTML" write) is never run (scripts might be loaded, but they won't be run).
Possible Methods:
Method 1: Halt page loading while allowing my own script to run and load an iframe (which in turn loads and runs), and once finished, continue page loading. Can remove elements according to preference.
Method 2: Focus on Iframes: replace them with a placeholder till the user allows the Iframe
Using the rules listed up top, we can halt the page from loading wherever we insert our script. The question remains - if we load our iframe, will it be allowed to "play" while the parent page is "paused" waiting for our script to allow it to continue? The likely answer is yes. Didn't test.
Method 2 would just replace current iframes (and listen for new ones added later) and wait for a signal from the user to add them back. The Method 2 script would have to be placed at the top of the "body" to be effective - it IS location dependent, as other (non-embedded) scripts in front of it would delay its execution, allowing iframes time to load. External scripts or styles above our script in the body would unacceptably delay execution of our script.
Method 2 blocks only Iframes. Method 1 blocks Iframes + scripts.
Nothing can block images. Which means the answer to this is a big "can't do it".
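A minimal sketch of Method 2's replace/restore step (the placeholder class name and the allowIframes trigger are made up for illustration; catching iframes added later would additionally need a mutation listener, not shown):

// Run as early in <body> as possible.
var blocked = [];
var iframes = document.getElementsByTagName("iframe"); // live collection
while (iframes.length) {
  var frame = iframes[0];
  var placeholder = document.createElement("div");
  placeholder.className = "iframe-placeholder";
  placeholder.setAttribute("data-blocked-src", frame.src);
  blocked.push({ placeholder: placeholder, frame: frame });
  frame.parentNode.replaceChild(placeholder, frame);
}

// Called when the user gives the signal to allow iframes again.
function allowIframes() {
  for (var i = 0; i < blocked.length; i++) {
    var entry = blocked[i];
    // Re-inserting the iframe element causes it to load at this point.
    entry.placeholder.parentNode.replaceChild(entry.frame, entry.placeholder);
  }
  blocked = [];
}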

Conclusion:

Possible to block all third party cookies : NO
Possible to block advertising cookies : YES (because we can block the ad frame)
Possible to block beacons : NO (unless in Iframes or loaded by script)
Possible to remove other scripts from the page : YES (not explained here, but possible, as long as they are located after our script)
Possible to remove/replace Iframes : YES

Apache CouchDB tests


I was looking into a solution both for log entries and for simple file hosting which would not require logging into the server (an HTTP interface). Long ago I looked into distributed DBs, and one of the ones I read about was CouchDB. This is not a "truly" distributed DB, but it does have automatic sharding, which I'm still not really sure what that means (jk), but it's apparently important. The one thing that made this DB stand out was its HTTP API. Meaning, you can run a web site with only the DB - no Tomcat/Nginx/whatever required. This appealed to me, as a concept of simplicity if nothing else. I assumed that combining your server with your DB would be faster than having them separate. So I looked into CouchDB in depth, and this is what I found:

Objects are JSON docs or attachments to those docs (attachments can be anything, generally an image file).
Objects can be served directly as-is, or through a "view", a "list", or a "show".
Direct document access returns JSON objects of the format {"id": . . . ,"rev": . . . ,"doc": ..the actual doc..}. No flexibility if you don't like that.
"Views" are typically results. In other words, in a sql db, it would be the result set of a query. A view is cached and updated as needed, automatically.
"Shows" are ways to reference single documents and format the response in a custom manner. 
"List" is a way to custom format, combine docs, apply templates, etc - basically a "Show" for a "view" instead of a single doc.
Documents can have "binary" attachments - images, raw JS, etc. This would have to be used for "fast" script serving where the script is runnable in a browser.
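For reference, a minimal sketch of the pieces described above, using the standard CouchDB design-document format (the database name and names like "example" and "by_date" are placeholders):

// _design/example  (stored as a JSON document in the DB)
{
  "_id": "_design/example",
  "views": {
    "by_date": {
      "map": "function (doc) { if (doc.date) emit(doc.date, null); }"
    }
  },
  "shows": {
    "plain": "function (doc, req) { return { headers: { 'Content-Type': 'text/plain' }, body: doc.title }; }"
  },
  "lists": {
    "titles": "function (head, req) { start({ headers: { 'Content-Type': 'text/plain' } }); var row; while ((row = getRow())) { send(row.key + '\n'); } }"
  }
}

// Typical requests against it:
//   GET /db/docid                                              direct document access
//   GET /db/_design/example/_view/by_date?include_docs=true    the cached "view"
//   GET /db/_design/example/_show/plain/docid                  a "show"
//   GET /db/_design/example/_list/titles/by_date               a "list" over the view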

After some tests, it appears that:
"Shows" are semi-fast. Views are faster since I think they are served direct from the cache.
"Lists" are not too fast, they cache the Views which make the list, but they have to JSON.parse the docs, and then stringify them again for EVERY query. This takes time - not disk access time, but process time, I believe. But because this is actually three steps: get the view, get the JSON object, process the objects, there are three places to bottleneck and apparently they DO bottleneck. Response times vary by an order of magnitude.
"Views" are very fast. It appears it's all cached and ready to serve. Too slow to be mem-cached, but I think that's because I use "include_docs" which makes the DB do lookups for each included doc.
Didn't test binary serving, but it's fast too.

Performance Tests (LOCAL): getting 12kb
1000 queries, using 5 connections (for the times in parentheses, the connection count was upped to 20)
  • "show" = 1300ms  (1180ms)
  • "list" = 23 seconds. Fastest single request: 14ms, Slowest single request: 220ms  (same)
  • "view" = 450ms  (430ms)
  • direct access = 450ms  (430ms)
  • direct access as attachment (binary) = 380ms  (same)
  • TOMCAT simple file serve = 450ms   (190ms)
  • ICON SERVER from memory = 200-280ms  (same)
The number of connections is sort of a grey area for me - I'm not sure of the technical ability of my OS or Tomcat or CouchDB to actually process or open N connections. So I go by the numbers, which for Tomcat say that opening 2 connections is no slower than opening 20, meaning I don't have the whole picture. Or maybe I do - Tomcat FILE serving got much faster with more connections - so maybe the Icon Server's Tomcat instance is just already at a bottleneck somewhere, and adding more connections doesn't do anything. Opening more connections showed negligible improvement in CouchDB, so obviously either there is a bottleneck elsewhere or there's just something completely unknown to me going on. However, opening fewer connections to CouchDB gave vastly worse times. What this means is that, given the strange result of the ICON SERVER response times being the same for 2 connections as for 20, that result has to be thrown out. Since that is the ONLY result that is meaningful to me, it's a big hit to the value of this test.
But in my tests of the live icon servers, I find that serving simple files is ALWAYS slower than serving from memory, so we can use the simple-file-serve time to represent (an upper bound on) the real icon server time. Any way you swing it, the icon server will be twice as fast as CouchDB. This is to be expected, though, as no normal DB will be as fast as information cached in memory. However, there are vast differences between best case and worst case.
So, since local tests aren't going to work for me, lets try elsewhere.
Tests on the QA Amazon server: getting 12kb from the CouchDB, 9kb from the I.S.
(Icon Server from memory cache)
100 queries, 1 connection:
  • I.S: 18.5s
  • CDB: 17.6s
1000 queries, 10 connections
  • I.S.: 21.5s
  • CDB: 27s
1000 queries, 30 connections
  • I.S.: 13.7s
  • CDB: 22.8s
These results reflect more definitively what the local tests suggested - as the number of connections grows, CouchDB's performance stops improving, to the point where adding connections actually reduces it. The Icon Server is getting a smaller file, 25% smaller than the CouchDB's, so the numbers are a little off, but we can see the overall trend. Adding a second or two to the I.S. times doesn't really change the result.

Summary

This is not a high-performance DB. This is a feature-specific DB. It would be great for repos or small projects.
These results are actually very positive for CouchDB in certain circumstances. It's half the speed of the current Icon Servers - but the icon servers are speed demons, and half their speed is actually very good for a non-memory-based DB. CouchDB clusters work like GIT repos - there is no master or slave. They work together to update each other and guarantee eventual consistency. This means that ANY one DB can be updated and the update will spread across the cluster automatically, and every DB is an exact duplicate of the others. If one fails, you lose nothing. If all but one fail, you still lose nothing.
The BEST thing about CouchDB is its independence from a separate server, via its HTTP API. Development time on CouchDB, **if it suffices for the purpose**, can be DRASTICALLY reduced compared to the normal Java EE stack.

What CouchDB is great for:

Dynamic content. Transactions, or dynamically bound content. Low-to-medium bandwidth. Version control - it would be great for a GIT-repo-type service, CHAT, transcripts, documents, or a cloud drive.

What CouchDB would be bad for:

Logging DB. High-bandwidth. Static content.

I'd recommend using this DB for independent (standalone) projects, proof-of-concept or "testing the waters" projects, or the cases in the "great for" section above. As for my particular reason for testing this, I wanted a system for dynamically binding content when it's served. Specifically, I wanted to apply a system of imports and modularization to Javascript files. What I found is that the only (practical) way to do this would be to serve a generic bootloader JS from the Icon Server and then have it make secondary requests to this DB, OR apply a secondary system to the CouchDB server itself - I would have to dedicate a DB connection to a listener which, when updates are applied to key files (modules), would update the resultant files (the served JS). This is exactly what I was attempting to avoid by using CouchDB - extra effort using an extra system.