PC-Doctor Blog

The Global Leader in PC & Android System Health Solutions

Year: 2007 (page 1 of 16)

Interface Design: APIs as a User Interface

This article closely parallels Interface Design: World of Warcraft vs. Excel. In that article, I claim that UI designers can learn from game designers. Now I’m going to claim that library authors can learn from UI designers.

Library authors write code that is used by other programmers. Their interface is called an API, and, like a GUI, it is all the user sees of the product. An API doesn’t look much like a GUI, of course. It is a highly specialized UI that can only be used by someone trained in its use. Continue reading

17″ Toshiba Satellite Review

It’s been six months since my purchase of a 17″ Toshiba Satellite P100. Before purchasing this laptop I spent many months researching the different manufacturers and quizzing the guys in our QA lab. They get to see lots of hardware, as well as complete systems, and are a great resource for info when purchasing a new system and/or hardware. I looked at all the major players (in no particular order): Lenovo, Dell, Gateway, HP, IBM, Sony, and Compaq. I even looked at a few of the smaller manufacturers like Acer and Alienware. Continue reading

Books that all Programmers Should Read

I read a lot of programming books. However, I haven’t read many that all programmers should read. Programmers do a lot of different things, and it’s pretty darn hard to write something for all of them.

In fact, I can only think of two books. If you know of another, I’d love to hear about it.

How to Write Code

All programmers write code. Furthermore, writing code is a skill that transcends language. If someone can write really good JavaScript code, then they probably understand most of the principles in writing Scheme code. The skill does not depend on language. Continue reading

Inside the Amazon Kindle

By now you might have heard of the Amazon Kindle, the e-paper-based reader that allows wireless access to a vast library of electronic books from Amazon, plus some selected blogs, newspapers, and such. There is even “experimental” access to the internet, though it’s not very powerful and is geared toward static content only. I won’t rehash the publicly available details on the unit; instead, here is an inside peek at its guts.

One of the things that I noticed right away is the limited set of file formats that the Kindle supports. Glaring omissions were the popular PDF format and all forms of images, such as JPG and TIFF. Amazon provides a conversion service, available both free and for a fee, that converts image files to the Kindle format. However, the service seems a bit slow and requires that the file be e-mailed in for conversion.

To see what it can do, I used the service to convert several image files. The result was an image file that apparently had a maximum size and a maximum resolution. An analysis of the files that were produced showed a header, followed by some HTML code, and then a binary image file. The file ended with what appeared to be a few closing bytes.

With a hex file editor I cut and pasted the obvious header and footer, and placed them around some JPG files of my own. The resulting combinations did appear on the Kindle as files, and were even partially viewable as images.

The Kindle, however, seemed to refuse to display images above a certain size. While I did not determine the exact cutoff point, it seemed to be somewhere around 64 kB. For an image larger than that, only the data in the first 64 kB displayed correctly; the remaining pixels were shown in a uniform, pleasant gray.

I can’t tell if the behavior is an intentional limitation in the Kindle software, a CPU-driven feature (e.g. 16-bit registers), or caused by memory constraints. I wanted to find out, so I had no alternative but to pop the covers on the unit.

The unit is relatively easy to open. There are 8 screws, all accessible without peeling labels. The corners of the plastic case have tabs with a very firm grip, but they will pop with an even pull on the side while pushing the top cover in the opposite direction. (Kids, if you have never done this before, or if you are worried about having a broken Kindle, don’t try this at home.)

The interior of the unit was not exactly crowded. There is one main circuit board, with attachments for the keyboard, the displays, main power switches, scrolling mouse and the SD card adapter.

The AnyData DTEV-DUAL cellular modem is connected using a board-to-board connector, but its shell is permanently soldered to the main circuit board. It seems to have two antennas, one on the top, and one on the bottom right side of the unit.

The main microcontrollers are a Microchip PIC16LF874A, an NXP ISP1761BF, and an Intel PXA255. The purpose of the PIC part is unknown, but its location suggests that it handles the scroller and the keyboard. The NXP part is a USB On-The-Go controller, which means that it can also function as a USB host, even though the Kindle only supports being a USB client device. The Intel part is an XScale ARM processor, which is likely the main processing unit for the Kindle.

Audio is handled by a Wolfson Microelectronics WM8971 Stereo codec that seems to be driven by another small PIC microcontroller marked “6282E/7270SH”. The latter could also be memory or a voltage controller, but I had insufficient time to determine its type.

Memory on the unit is present in two Infineon 256-megabit Mobile-RAM parts, giving the PXA255 a total of 64 megabytes of RAM, accessible over a 32-bit bus. There are two Samsung K6F4016U6G chips that provide 1 megabyte of SRAM, also accessible over a 32-bit bus, which are either working memory for the NXP part or, more likely, video memory. Firmware is stored in a Spansion 512-kilobyte (4-megabit) boot-sector Flash, type S29AL004D90BFI01.

The display controller is the same as on the Sony electronic book series, a part marked “9322 571 0032 1 Apollo 1.18 T6TW8XBG-001”.

The only part that remained a minor mystery is marked KFG2G16Q2M-DEB8 from Samsung. It is located next to two ADG3247 bus switches. It appears to be a 2 gigabit Flash part (256 megabytes). Its proximity to the bus switches might mean that these provide for sharing of the Flash between the NXP part when USB is connected to a PC, and the Intel PXA255 in the normal operating mode.

The last parts worthy of mention are an LM75A and an LTC3455. The LM75A measures temperature and is likely used to avoid overheating the battery and the cellular modem. The LTC3455 provides battery power management, charging, and DC-DC conversion to change the variable voltage of the battery into a fixed voltage for the internal electronics.

Conclusions

Now that I know what’s inside the unit, it’s easier to say which limitations in functionality are intentional and which are not.

First, there appears to be no good reason for this system not to handle large images. The CPU is a 32-bit part, there is plenty of memory, and video output is not generated on the fly but stored in SRAM. Hence, the limit is arbitrary.

This system should also easily be able to handle compressed and encrypted PDF files, making that limit arbitrary as well.

The USB-OTG controller is interesting, as it means that the unit might have future applications beyond what’s apparent.

Hacking Opportunities

I’m not about to start hacking my Kindle, but it’s apparent that serious hacking opportunities are present. The PXA255 is a common MCU, and its firmware comes from an external flash. The file system is likely on the other flash. And access to all is likely to be easy, since the board seems to have JTAG connectors and connection points for programming the various microcontrollers and memories. One of the connectors is even accessible while the case is closed, just by popping off a small access panel next to the battery.

If you end up hacking your Kindle, please let the world know what you find!

What’s that Data Structures Class for?

I assume that when computer science students go through college, they all take a required course in data structures. If I were designing a course like this, I’d make them learn how a variety of useful data structures worked. Certainly if you read a book on data structures you’ll learn this sort of thing.

How many programmers actually need this information? In today’s world, there are a lot of libraries out there with reasonable implementations of AVL trees, hash tables, and B-trees. Certainly some people need to learn to write these things. But why does your typical student programmer care about the different ways to handle collisions when inserting into a hash table?

Okay, I’ll admit that programming without this knowledge would, for me, feel a bit like skiing naked. It’d be a bit too weird to choose an algorithm just because someone told me to always use AVL trees, since their worst-case performance, and therefore their predictability in the face of ignorance, is better. For me, I’d rather be able to use a sorted array to improve locality of reference even if I don’t know that there’s a problem anywhere. I’m sure that at least 95% of the time that I’ve used an immutable sorted array it hasn’t made a difference. I certainly don’t check with a profiler unless a real problem exists.

Every so often performance does matter, though. It sometimes matters a lot. In that case, you might say that you need a programmer who knows to check the hash tables to make sure they aren’t behaving horribly. However, a decent profiler is likely to tell you a lot of useful information without having any knowledge of data structures. Since these cases are rare for typical programmers, wouldn’t they be just fine if they knew a collection of data structures that they could swap in until they got something that worked better?

I can’t remember many times when I’ve had to swap out a data structure because of performance issues. The last time was inserting into the end of a very large STL vector. The copies were too expensive due to C++’s tendency to copy things too often. (Even this case will be fixed in the next C++ standard, which adds move semantics.) Anyway, the STL has some other data structures that can be used. I was able to replace my data structure, and things immediately improved. I can’t remember enough details to know what knowledge I needed. It’s also possible that I guess correctly more often than an ignorant programmer would; who knows how many times they might need to swap things randomly?

A C# programmer would have it even easier. The System.Collections namespace in .NET doesn’t have a lot of different options, so it’d be pretty easy to try all the possibilities fairly quickly. If none of the options solves the problem, it’s entirely possible that something could be done elsewhere.

Memory and speed performance are pretty much the only times you might care about the differences between a hash table and an AVL tree. A few years from now, application programmers may just add a bit of concurrency if they want more speed. Few web applications run low on memory. Are data structures classes useful anymore for typical programmers?

I’ve left a lot of unanswered questions in this post. I’m really curious about the answers. I’d love to hear from you.
