Archive for the ‘Tech’ Tag
I’ve been hearing this confusion for a while now, and decided it needs to be addressed.
What’s this 3G/4G we keep hearing about everywhere, and what does it mean to me??
The main reason for the confusion is the completely illogical and confusing naming convention and ‘branding’ (it sounds almost dirty) used by all the telcos and manufacturers.
Let’s go back to the basics..
1G. When mobile telecommunication started off, it was analogue. All communication between the phone and the tower was just an analogue-modulated signal of the speech. This is called 1G. The standards used during this time were NMT, AMPS, ETACS, etc.
2G. With the digital age, this communication also became digital. The voice was digitized and transmitted to the tower as a stream of ‘1’s and ‘0’s. The main advantages of the 2G systems were that they were encrypted (kinda), so you couldn’t listen to other people’s conversations by just catching their signals with an antenna. The digital systems were more efficient (allowed more people to talk at the same time), and they were extensible (we’ll come back to this). Some of the famous 2G standards are GSM, CDMA (IS-95) and iDEN. GSM was part of a standards group called 3GPP and CDMA was part of something called 3GPP2. (Yes, I know it’s not exactly this, but this makes it easier.)
Since they were digital, many of these standards also allowed other digital data to be sent over the same mechanism. That’s how they started to send SMS over these networks. That was 2G.
2.5G. A ‘half-step’ towards the next generation of wireless telecommunication standards was a more generalized data network (not restricted to SMS, etc.). This brought GPRS (on the 3GPP side) and CDMA2000 1xRTT (on the 3GPP2 side), which are both data standards for 2G-based networks.
2.75G. The 3GPP side decided to improve the data standards on their side of the fence by introducing EDGE (Enhanced Data rates for GSM Evolution). This improved the data rates and speeds of existing GPRS networks.
3G. This was the 3rd iteration of the mobile telecommunication standards. Now this is where it gets messy. For any standard to qualify as ‘3G’, it had to meet requirements set by the ITU (International Telecommunication Union). But that’s of course not how the telcos advertise it.
So both groups evolved their standards to qualify for the new requirements. The 3GPP side came up with UMTS (W-CDMA). The W-CDMA here refers to the technology used, and has nothing to do with the CDMA standard from 2G. The 3GPP2 side came up with CDMA2000 1xEV-DO, widely known as EV-DO.
3.5G. Of course, we can’t stop at 3G, so there were enhancements to the standards. The 3GPP side moved to HSPA (HSDPA and HSUPA), while the 3GPP2 side went with EV-DO Rev A. These were mainly just speed bumps. Confused?? The guys at commandN did a nice table covering everything up to 3.5G.
3.75G. We can’t stop here either. Even more speed: 3GPP – HSPA+; 3GPP2 – EV-DO Rev B.
3.9G (pre-4G). Now this is where the fun starts. The ITU has come up with requirements for 4G. However, the two main standards gunning for it (WiMAX and LTE) can’t meet them in their current versions. But since everyone wants them commercialized and available, the groups decided to release the current versions anyway.
LTE (Long Term Evolution) is the 3GPP group’s version of the next-generation standard. Once again, it can’t perform to the requirements of the ITU and hence is considered 3.9G. The next version, LTE-Advanced, should be the one called 4G.
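All of the generations and standards above boil down to a small lookup table. Here’s a quick Python sketch of it (the groupings follow the post; it’s a simplification, just like the post itself):

```python
# Rough map of generations to the standards discussed above,
# split by standards body where that split applies.
GENERATIONS = {
    "1G":    {"examples": ["NMT", "AMPS", "ETACS"]},
    "2G":    {"3GPP": "GSM", "3GPP2": "CDMA (IS-95)"},
    "2.5G":  {"3GPP": "GPRS", "3GPP2": "CDMA2000 1xRTT"},
    "2.75G": {"3GPP": "EDGE"},
    "3G":    {"3GPP": "UMTS (W-CDMA)", "3GPP2": "CDMA2000 1xEV-DO"},
    "3.5G":  {"3GPP": "HSPA (HSDPA/HSUPA)", "3GPP2": "EV-DO Rev A"},
    "3.75G": {"3GPP": "HSPA+", "3GPP2": "EV-DO Rev B"},
    "3.9G":  {"3GPP": "LTE", "IEEE": "WiMAX (802.16e)"},
}

def standards_for(generation):
    """Return the standards listed for a generation, or None if unknown."""
    return GENERATIONS.get(generation)
```

So when a telco says ‘4G’, you can check which row they’re actually selling you.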
So what does it mean to you??
1. Anyone selling you anything which is ‘4G’ is fooling you.. None of the 4G standards are mature, and surely none have been commercialized. But many telcos are bringing out LTE and WiMAX 802.16e and touting them as ‘4G’, or ‘4G-ready’. It’s not true.
2. You will have to change hardware. Yes. None of these technologies are interoperable. Just like GSM/CDMA, the phones and modems supporting these standards will be completely different (they might even have very different mechanisms of authentication, e.g. SIM cards). And most of the devices you have now that can do 3.5G/3.75G won’t be able to support 3.9G, though the reverse might be possible.
3. Faster mobile telephony is coming soon. Yup! This is a given; going forward we’ll surely be seeing more and more devices supporting these new standards. Be careful what you buy. While most will support the fastest networks currently available, you don’t want to be left behind when the next change comes.
P.S. To learn more about these standards, look up any of these names on Wikipedia.
Necessity is the mother of invention, it is said.. And sometimes it just takes an annoying repetitive task to push someone to do something..
I’ve always been interested in AppleScript and Automator. These are Apple’s scripting/automating/batch processing frameworks. AppleScript is basically a scripting language which allows you to command many OSX apps. The amount of control you can exert over a running app depends on how the app was made (whether they put in the hooks for AppleScript or not), but most Apple apps are pretty ‘scriptable’. Automator is automation for noobs. Instead of writing a script, you just drag and drop “actions” to create a “workflow”, which lets you pass the output of one action to another and process it. It seems pretty lame at first, but once you start making your own droplets and workflows it’s great fun!
So, during one of my labs, the analyzer we were using was unable to store/save the data we captured during the experiment. It was an old analyzer which used 3.5″ floppy disks, but the disk drive had stopped working. So we decided to take photos of the analyzer’s small screen when it displayed the data, and then transcribe them later.
When I saw the sheer number of files which needed to be transcribed (and my entire evening gone doing that), I thought of doing some OCR (Optical Character Recognition). Google helped me find Tesseract, a *nix utility which does OCR. Great. I managed to find a MacPorts port for it and got it to run on OSX.
OK. So far so good, but Tesseract only accepts one file as input and requires that file to be in .tiff format. Now, I could have written a bash (or perl :P) script to convert all the files to .tiff and then loop over them calling Tesseract, but that’s too much work and surely not the ‘Apple way’. So, I called on Automator.
After a bit of tweaking and testing, here is my final workflow, which creates a droplet. Any jpg file dropped on this droplet is duplicated, converted to .tif and OCRed through Tesseract, and the output is stored in a file with a .txt suffix.
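For the non-Automator crowd, the same pipeline can be sketched in a few lines of Python. This is just an illustration of what the droplet does, not the droplet itself; it assumes `sips` (which ships with OSX) and `tesseract` are on your PATH:

```python
import subprocess
import sys
from pathlib import Path

def plan_commands(jpg_path):
    """Build the two commands the droplet effectively runs per .jpg:
    a sips conversion to TIFF, then a Tesseract OCR pass.
    (Tesseract appends the '.txt' suffix to its output name itself.)"""
    jpg = Path(jpg_path)
    tif = jpg.with_suffix(".tif")
    out_base = jpg.with_suffix("")  # tesseract writes <out_base>.txt
    convert = ["sips", "-s", "format", "tiff", str(jpg), "--out", str(tif)]
    ocr = ["tesseract", str(tif), str(out_base)]
    return convert, ocr

def ocr_jpeg(jpg_path):
    """Run both steps; returns the path of the .txt file produced."""
    convert, ocr = plan_commands(jpg_path)
    subprocess.run(convert, check=True)
    subprocess.run(ocr, check=True)
    return ocr[2] + ".txt"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(ocr_jpeg(path))
```

Same idea, just without the drag-and-drop fun.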
The OCR output was not the best. I had to massage (crop, rotate, gray-scale, etc) the images to get a good output.
You can download it here, but you’ll need Tesseract to make it work. Yay!
Btw, this was a failed attempt at an April Fools’ prank..
Since Daniel has already made the announcement on Tech65, I guess I have to set things straight here. I joined Tech65 almost 2 years ago, when I met Jerrick and Daniel at the E-Life Starbucks. I thought they were a really great team, with the potential of becoming the Rev3 of Singapore.
However, in recent days things took a turn for the worse, and there have been many disagreements between me and the rest of the Tech65 crew. Thus, for the following reasons, I have decided to leave Tech65.
1. Too much Social Media content. I am sick of people ranting and raving about Social Networking and Social Media. Tech65 was supposed to do Tech podcasts, not discuss Marketing and PR strategies. I cannot be part of a team that desecrates the ideology and purity of Technology with things like Social Media, PR and Marketing. And it doesn’t help when one sees such updates to the team and promises of more such content.
2. No respect for Creative Commons and Open Source content. When I initially joined Tech65, I pushed for our content to be made Creative Commons and Open. I believe in these things more than XKCD, and I finally managed to convince the crew last year. However, in the recent few weeks, unbeknownst to me, they decided to change that, and only the audio and video parts of our content are now published as Creative Commons (you can see it on the website). I find this disgusting. Not only do I believe that we should pay forward the goodwill and the help we’re getting from everyone, it’s just disrespectful to change things back without my knowledge.
3. Tech65’s upper management’s attitude with respect to finance was the last straw. It’s great to have an organization based on volunteerism and individual interest and passion. But it just sucks when you find out that it’s all a lie. I always believed that Tech65 was run as a non-profit organisation. But recently, I have noticed a lot of plugging going on in the content. I got suspicious when the same party was plugged recurrently, for no obvious reason. I am sure Hisham will agree with me on this. More investigation revealed that Tech65 was being paid for plugs!! Obvious questions of journalistic integrity aside, I found it hard to swallow that while the upper management was making a great deal of bananas (and I will say it’s a sweet deal in this kind of economy), we were getting paid only peanuts!! Even the weekly coffee at GT is out of my own pocket..
And thus I have decided to leave Tech65 and let this joke of a Tech podcasting group be.
I am still devoted to providing great commentary and analysis about new and upcoming technologies, and hence will be starting a new project in the near future. Stay tuned on this blog to hear more about it.
Creative technologies, yes yes.. our DEAR Creative announced their new initiative a couple of weeks ago. It’s called Creative Zii.. with the subtitle “Stemcell Computing” (which they seem to have trademarked).. and the tagline “Everything you know is about to change”.
There were a few more details/speculations which floated around the internet. The first was an email update from Creative, which states..
Being in the consumer electronics industry, it’s really awesome to see your work being used by people… It’s the kind of satisfaction you get when you see your children do well in their school…
Anyway, now that all of them have been officially announced…here are my 11 babies…
1. MotoROKR Z6 – The Small starter
2. MotoRAZR2 V8 – The Large flagship
3. MotoROKR U9 – The belated Pebl
4. MotoROKR Z6w – The WiFi that never was
5. MotoZINE ZN5 – The 5MPixel from the past
6. MotoROKR E8 – The radical Omega
7. MotoROKR EM30 – The Local variant.
8. Motorola VA76r – The rugged Monster.
9. MotoAURA – The round Jewel
10. Moto VE66 – The oldest Brother
11. MotoROKR EM35 – The last Slider
This is a continuation of Part 1 of the discussion on Google Android.
So will we be able to see Android on the iPhone, the Nokia N95, or even the Motorola RAZR? Herein lies the biggest misconception about Open OSs.
Open-source really means that the source code of the OS is available for all to use. So we can read it, port it, re-compile it, program it onto some device and do whatever we want with it. But it doesn’t mean we can run it on ANY cellphone. The restriction is not the SW but the HW. To understand this, we have to look at how cellphones work.
Cellular networks, on which all cellphones run, require a specific signaling protocol to talk to them (Baseband communications). These protocols include layers of security, error correction, and meta-information to ensure usable communication quality for all. There are also many security mechanisms built into these devices so that users will not be able to spoof as others, hijack a signal, spam the cell tower, etc. And thus, many cellphone OSs (more on this later) to this day have been Closed. That way nobody is able to “hack” their way into the phone by reading the code and doing something nefarious.
With hackers getting better equipped, many cellphone manufacturers started building heavy “chain-of-trust” security mechanisms into the cellphone to ensure that nothing except their own, valid (and hence presumably not nefarious) code runs on it. The idea here is to have each component validate that the next component can be trusted before handing control to it. And this whole chain is “anchored” in some HW, so that every time the cellphone powers on, the HW ensures that the 1st component (something like a BIOS) is valid before starting it, and the BIOS ensures the validity of the OS, and so on and so forth. Hence it’s very difficult to run any code on modern-day cellphones other than what has been “blessed” by the manufacturer.
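As a toy illustration of that chain-of-trust idea (nothing like real firmware, just the logic): each stage carries the expected digest of the next stage’s image, and refuses to hand over control if the image doesn’t match. The names and structure here are all made up for the sketch:

```python
import hashlib

def digest(image):
    """SHA-256 digest of a component's image (as bytes)."""
    return hashlib.sha256(image).hexdigest()

def boot_chain(stages, expected):
    """Walk the boot chain: each stage runs only if its image matches
    the digest recorded by the previous (already-trusted) stage.

    `stages` is an ordered list of (name, image_bytes);
    `expected` maps name -> digest, anchored ultimately in hardware.
    Returns (list of stages that booted, final status)."""
    booted = []
    for name, image in stages:
        if digest(image) != expected[name]:
            return booted, "halted at " + name  # untrusted code: stop here
        booted.append(name)                     # trusted: hand over control
    return booted, "booted"
```

Swap in a tampered OS image and the chain halts before it ever runs, which is exactly why flashing Android onto such a phone is so hard.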
Furthermore, these signaling protocols have strict real-time requirements, and thus need an OS which supports real-time scheduling to handle the signaling. Linux inherently does not support real-time scheduling (there are variants which do, like RTLinux, etc.) and so it can’t be used for the Baseband communications. Thus most cellphones have multiple OSs running together: a Baseband OS (or Modem) and your usual “SmartPhone OS”. These two need to talk to each other in order to place and receive calls, data, etc. So the “SmartPhone OS” needs to know how to talk to the “Baseband OS”. This is not as easy as it seems, as the Baseband OS is often proprietary and generally Closed to prevent any tampering by the user, especially since many security mechanisms (like the Subsidy Lock in the case of the iPhone) are implemented by the Baseband OS.
Hence, there are two problems in getting Android running on any generic phone: 1. Does the HW even allow some other OS image to be programmed and run on the phone? 2. Can Android talk to the Baseband OS and use the phone side of things?
In most phones the HW will only allow the original OS to boot, and no one knows how the Baseband OS works or how to communicate with it. This is why I feel it will be very difficult to see Android running on other phones without the HW manufacturers themselves supporting it explicitly.
So is Google Android Open? Surely!! But that doesn’t mean you’ll see it running on an iPhone near you anytime soon..
This is a two-part discussion on Google’s Android OS.
There has been much furor recently about Google’s new Embedded Operating System “Android“. The biggest reasons for the attention are the clout of Google as a player in the IT industry, and the fact that the Operating System (OS) is completely Open-sourced. So let’s look at this “Open”ness of Android and a few misinterpretations people have about it.
Google has basically picked up the Linux kernel, which itself is under the GPL license (Open-source), modified it to work on Embedded systems, and added their framework on top of it. This framework allows users to easily develop applications which can run on the cellphone. The idea is that once you write an app using this framework, you will be able to run it on all products which have Android ported to them, a very Java-esque premise. And not surprisingly, most of this functionality is exposed to the developer through “Java”.
Android itself will be released under the Apache License where applicable and under the GPL elsewhere. (Another interesting side note here is that since the Mobile Edition of the popular Sun Java Virtual Machine is not completely Open-sourced, Google went ahead and built their own Virtual Machine and will be Open-sourcing it under the Apache License. You can read more about this here.)
So there are two important aspects of Android which the community at large is excited about: the comprehensive (erm, maybe not so much) and open-sourced APIs, or libraries, for developers to make their applications; and the OS itself, which is open. And these two are NOT the same. The benefits arising from them are also NOT the same, and in fact not at all related.
An Open API for app development allows the community to add new functionality to the libraries given by Google. For example: “Display refresh API takes too long to execute? No worries, we can look at the code, optimize it, generate a new library which is faster and use that”. More realistically, sometimes it helps to look under the hood into the libraries and APIs to see exactly what they are doing, to be able to use them wisely and properly.
The Open-sourced OS itself is a whole different ball game. OSs, especially Embedded OSs, are built to run on specific hardware, a specific platform, e.g. an ARM9, a PowerPC, or an x86 processor. A specific operating system image will not run on another platform unless it is compiled for it. So having an open OS allows the community to make ports of Android to various platforms. We have already seen this with the port of Android to the N810 HW, and cellphones or mobile devices are not the limit. In the latest episode of Tekzilla, Patrick Norton talks about how Android would also be useful in set-top boxes and all other Embedded devices which require a user interface.
Also, an Open-sourced OS will allow the community to hack, fix, tune and tweak the OS to make it do whatever they want. Things like enabling new functionality, protocols, supporting new peripherals, or improving the performance for certain types of applications, etc.
So the golden question is: will we be able to see Android running on the iPhone, the Nokia N95, or even the Motorola RAZR? Herein lies the biggest misconception about Open OSs.
Check out Part 2 of this article for the rest of the discussion on Google Android and Cellphone OSs.
Warning : Super technical, geeky post ahead!
I was watching Episode 52 of Systm, where Patrick was trying to hack/mod an iPod/iPhone connector to allow it to charge from various sources other than the USB port of your computer. He also wanted to get line-out audio from the connector to connect to any audio system.
Aside: Systm has just revived into the awesome show it started as. Patrick did one episode on distillation, two on iPod/iPhone hacking/modding, and possibly has more coming up. Do watch it if you are interested in this type of stuff.
So, while Patrick was able to power up and charge the iPod (a 5th Gen iPod, I believe) by connecting just the VDC and GND pins of the connector to a 5V supply (possibly the VCC pin of a USB cable) and GND respectively, the same did not work on the iPhone. Why??
For this we need to look at USB’s electrical specs, especially the power sinking and supplying parts.
There are 3 classes of USB hosts and 3 classes of USB devices/functions defined in the specs. Hosts are devices like PCs, USB hubs, etc., which can connect to multiple devices and act as the “Master” in most transactions. Devices are the “Slaves” in a USB connection. All cameras, phones, thumb drives, etc. are considered devices.
The power source and sink requirements of different device classes can be simplified with the introduction of the concept of a unit load. A unit load is defined to be 100 mA. A device may be either low-power at one unit load (100mA) or high-power, consuming up to five unit loads (500mA). All devices default to low-power. The transition to high-power is under software control. It is the responsibility of software to ensure adequate power is available before allowing devices to consume high-power.
So the 3 USB Host Classes are..
Root port hubs: These are directly attached to the USB Host Controller. Systems that obtain operating power externally, either AC or DC, must supply at least five unit loads to each port.
Bus-powered hubs: Draw all of their power for any internal functions and downstream facing ports from VBUS on the hub’s upstream facing port. Bus-powered hubs may only draw up to one unit load upon power-up and five unit loads after configuration.
Self-powered hubs: Power for the internal functions and downstream facing ports does not come from VBUS. However, the USB interface of the hub may draw up to one unit load from VBUS on its upstream facing port to allow the interface to function when the remainder of the hub is powered down.
And the 3 USB Device Classes are..
Low-power bus-powered functions: All power to these devices comes from VBUS. They may draw no more than one unit load at any time.
High-power bus-powered functions: All power to these devices comes from VBUS. They must draw no more than one unit load upon power-up and may draw up to five unit loads after being configured.
Self-powered functions: May draw up to one unit load from VBUS to allow the USB interface to function when the remainder of the function is powered down. All other power comes from an external (to the USB) source.
So basically, there are hosts that can supply up to 500mA and devices which can sink up to 500mA. However, all hosts and devices default to supplying and sinking (respectively) 100mA. To switch to the high-power mode, the device software (USB stack) has to request the host to start providing 500mA. This request is made through the Default Control Pipe of the connection, using the “SetConfiguration” USB device request. In this configuration, the value of the maximum current, in mA, can be set. However, these requests go over the D+ and D- USB pins, since all data goes over those pins. And so, these pins have to be connected to allow the device to request 500mA.
Now, that explains why many times our devices just can’t charge from those USB wall-chargers: the devices request and expect the 500mA charging configuration, but since the wall-chargers are not “hosts”, and don’t have any micro-controller or USB stack running, they cannot reply to these requests. For that matter, most of them, just like Patrick’s charger, don’t even have the D+ and D- pins connected at all. And since the device can’t get 500mA, it may decide it can’t charge and stop.
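To make the negotiation concrete, here’s a toy Python model of the dance described above. This is not real USB stack code, and the function names are made up; it just captures the logic of defaults, the SetConfiguration request, and a dumb charger that can’t answer:

```python
UNIT_LOAD_MA = 100  # one USB unit load
MAX_UNITS = 5       # high-power devices: up to five unit loads (500mA)

def negotiate_current(host_answers_on_data_pins, requested_ma):
    """Toy model of the SetConfiguration dance.

    A real host answers requests sent over D+/D- and can grant up
    to 500mA. A dumb wall-charger has no data pins (or no USB stack),
    so the request goes unanswered and the device is stuck at the
    100mA default configuration."""
    requested_ma = min(requested_ma, UNIT_LOAD_MA * MAX_UNITS)
    if host_answers_on_data_pins:
        return requested_ma
    return UNIT_LOAD_MA  # default configuration: one unit load

def iphone_will_charge(granted_ma):
    """The post's conclusion, as a predicate: Apple (presumably) only
    enables charging once the full 500mA configuration is granted."""
    return granted_ma >= UNIT_LOAD_MA * MAX_UNITS
```

Run it against a dumb charger and the iPhone predicate comes back False, which matches what Patrick saw on the bench.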
So now, finally, the question is: why can an iPod charge on 100mA (I have tried it using a USB wall-charger; it works), but the iPhone can’t?
The answer is simple: the iPhone has a GSM radio for cellular telecommunications. And GSM radios draw huge amounts of current when they broadcast data in bursts. During these bursts, they may sink in the range of 70mA. So if you have a flat battery, a charger providing only 100mA, and a radio sinking 70mA, only ~30mA is left for the other components, including the touch screen and the backlight. Running on low current might cause many issues, especially for displays and touch sensors. So Apple must have decided to only charge the iPhone (and, in the case of a low battery, allow it to boot up) if the 500mA configuration has been successfully negotiated.
There you go, some of the inner workings of USB and charging on devices.. Drop me an email, or comment on this post, if you have any questions or queries about these things, and I will try my best to answer them..
Class, this is your assignment for today. YOU have to read through all the stories below, and prepare for the discussion during tomorrow’s live recording. If you are unable to answer our questions tomorrow, you WILL BE made the outstanding student (the student standing outside the class)…
We expect all students to be badly behaved, and participate in the discussions impolitely and rant about things… You will be graded on your performance.
1. April Fools Pranks on the Interweb.
Ping.sg’s “Whose pong is it anyway” game
2. Unlimited Tunes from Apple? Not So Fast
3. Vista Issues
4. Creative to sell company HQ for $180m
5. Creative Goes After Driver Modder
6. Eeee PC Updates: New models and SDK
7. Fari and Jerrick’s Experience at M$’s Xbox 360 Event.
8. Byte of the Week : Adobe Photoshop Express
For those who missed the other posts, the details of the live recording are as follows.
65th Episode of 65Bits Podcast.
Date: Saturday, April 5, 2008
Time: 10:00am – 12:00pm
Location: Geek Terminal
Street: 55, Market Street (Near Raffles Place MRT, opp. Golden Shoe Complex)
City/Town: Singapore, Singapore.
As you might know, I have never been fond of the iPhone. Crazy as the geek community might be about this device from Apple, it never appealed to me. Even with all the fanbois saying that you have to touch it to feel how wonderful it is, which I did, I was never attracted to it. It was just too much for the simple functionality which I require from my cellphones, and required too many bananas to own..
But after listening to a recent episode of TWiT, and the constant argument by Dan from Tech65 that the iPhone’s best feature is its full-fledged browser, I realised that maybe it did have a place in my world..
It all begins with RSS. Dave Winer, the guy who pioneered RSS, was on episode 134 of TWiT (yes, yes, the infamous one), mentioning how iTunes’ use of RSS for podcasts is nowhere near the potential that RSS provides. One of Dave’s envisioned uses of RSS for podcasts would be a mobile device that allows the aggregation of podcasts wirelessly (~40 mins from the start).
That got me thinking: why do we have to tether the iPhone to a PC/Mac to load content onto it? While it is of course a better solution from Apple’s perspective, as they have more control over what gets onto the iPhone, “valid” content should be downloadable over the air. And podcasts are surely one of the best examples of “valid” content. Not only that, the mechanism for getting them, RSS, is also something that’s already supported on the iPhone. So why can’t we have an iTunes or a podcast aggregator on the iPhone itself?
Moving on, podcasts might not be everyone’s cup of tea (though I would like to argue otherwise; we’ll leave that for some other time), but news is. And RSS feeds are surely the best way of getting news, and more specifically, getting news while you are on the go. But RSS isn’t so much a content delivery mechanism as a notification mechanism, at least the way it is used these days. So what is the use of an RSS feed without a browser to actually surf the content you were notified of?
So now, with the iPhone SDK released, adding this functionality to the iPhone would be a matter of making the right app. A podcast aggregator, or podcatcher, which lives on the iPhone and is able to download podcasts as they come out, directly to your iPhone, using WiFi or cellular data, whichever is available. And an RSS aggregator which works with Safari, a full-fledged browser (albeit with no Flash support), to give you the complete experience of news over RSS. Furthermore, if you could integrate these apps even more, you could hyperlink the “show notes” delivered with your podcasts (something that’s already doable with enhanced podcasts) and surf them in the browser while you listen to the podcast in the background. There you go: your complete podcast experience, place-shifting along with the rest of the experience.
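The podcatcher half of that idea is small enough to sketch. A podcast feed is just RSS where each item carries its media file as an &lt;enclosure url="…"/&gt; element, so the core of a podcatcher is: parse the feed, collect the enclosure URLs, download them. A minimal sketch using Python’s standard library (the feed structure is real RSS; the download step and names are just illustrative):

```python
import xml.etree.ElementTree as ET
from urllib.request import urlretrieve

def enclosure_urls(feed_xml):
    """Pull the media-file URLs out of a podcast RSS feed.

    Podcast feeds attach each episode's audio as an
    <enclosure url="..."/> element inside an <item>; finding
    those is really all a podcatcher needs to do."""
    root = ET.fromstring(feed_xml)
    return [enc.get("url")
            for enc in root.iter("enclosure")
            if enc.get("url")]

def fetch_episodes(feed_xml, save_dir="."):
    """Download every enclosure: the hypothetical 'on-device' step."""
    paths = []
    for url in enclosure_urls(feed_xml):
        filename = save_dir + "/" + url.rsplit("/", 1)[-1]
        urlretrieve(url, filename)
        paths.append(filename)
    return paths
```

That’s the whole trick: the hard part was never the software, it was getting it to live on the phone instead of being tethered to iTunes.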
So maybe, the iPhone could be the key to podcasts becoming bigger. Anyone interested in developing these iPhone apps?