Three Of A Kind? Fuji X100 vs Sony RX100 vs Ricoh GR


SONY NEX-5 (27mm, f/6.3, 1/125 sec, ISO200)

People have always recorded their memories when traveling to foreign places, and today is no different: almost everybody takes pictures when abroad. The only thing that has changed over time is the camera, which now ranges from a smartphone to a full frame DSLR with a backpack full of lenses and a tripod. What I have seen so far is that smartphone photography is growing while tripod use is declining.

There is no question why tripods are declining. Before digital, many of us shot slide film, and Fuji's Velvia was one of the most popular. With a real sensitivity of about ISO 40 and no image-stabilized lenses, it was clear that a tripod was needed to get sharp images in anything but bright daylight. Now, with clean ISO 800 and 4-stop IS lenses, there is no need to bring a tripod. Even the slightly blurred waterfall (1/15 s) can be achieved by relying on the image stabilization of the standard zoom. On my last trip to the USA I left the tripod at home for the first time, and I didn't miss it.
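
To put rough numbers on that (a back-of-the-envelope calculation of my own, not exact lab figures): the jump from ISO 40 slide film to clean ISO 800, plus a 4-stop stabilizer, works out to roughly eight stops of extra hand-holdability.

    // ISO 40 film vs. clean ISO 800, plus roughly 4 stops of image stabilization
    let stopsBetween (fromIso : float) (toIso : float) = log (toIso / fromIso) / log 2.0
    let isoGain = stopsBetween 40.0 800.0   // about 4.3 stops
    let totalGain = isoGain + 4.0           // roughly 8.3 stops in total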

There is also no question why smartphones are rising. Everybody has one. It is easy to upload to your Facebook or Flickr account, tweet, or send an email. Your friends see your picture right after you take it. The images are organized automatically by date or location, and there is a fully automatic backup somewhere in the cloud. No need to mess around with files on a computer. Very convenient. But it comes at a high price: image quality is anything but good.

I’m sure some will regret that they took their images only with a smartphone instead of using a real camera.

So what to do if a smartphone is not an option and a DSLR is just too heavy and complicated? A couple of years ago the only option was a small digital camera with a sensor hardly bigger than the one in our smartphones. Thanks to Olympus, that has changed. Small cameras with big sensors are the perfect solution if you want to travel light but image quality is still important to you.

There are so many options today, but one of the most consistent solutions is to get a small camera with a big sensor and an integrated lens.

Here I want to compare three of them:

  • Fuji X100
  • Sony RX100
  • Ricoh GR

Three cameras that have two things in common, big sensors and integrated lenses (the X100 and GR have prime lenses, the RX100 has a "28-100mm" zoom), but apart from that are completely different. The picture above is misleading: the Fuji X100 is much bigger than the other two.

I will post only images taken with the GR here; there are plenty of RX100 and X100 images on my blog and thousands more on the internet. I put one at the beginning of every chapter to make navigation easier in case you want to skip a section, e.g. the one that deals with image quality. This one:

Image quality:

R0001352 - RICOH GR (18.3mm, f/5.6, 1/1500 sec, ISO100)

Let's start with image quality. The good news first: all three of them deliver an image quality that is clearly better than what you get from any APS-C DSLR that is older than five years. That includes cameras like the Nikon D2X; in other words, what you get here is superior to what professional photographers used to shoot with a couple of years ago.

So even the "worst" here, the Sony RX100, gives excellent images in good light and still very impressive results up to ISO 800. The other two are even better thanks to their much larger sensors. But there are other things to consider regarding image quality.

If you want to shoot JPEG and don't want to post-process your shots, the otherwise great Ricoh GR is not an option. The Sony RX100 is very good in good light, and the Fuji X100 is impressive.

The Sony and the Fuji have very reliable metering systems that get exposure right most of the time. The Ricoh tends to underexpose quite a bit and needs exposure compensation very often.

Ricoh's Auto-WB is the weakest: colors are too cool outdoors, and it practically "kills" sunsets. The Auto-WB on the Sony is good but errs on the cool side too, especially in shade, though to a lesser extent than the Ricoh. Fuji's WB, on the other hand, is the best in the industry. I have shot Nikon and Canon DSLRs and a lot of other cameras, but Fuji is clearly a step ahead of all the others. It almost never fails, even in the most challenging light. Fantastic!

For the JPEG shooter the Fuji is the best option. The Ricoh GR cannot be recommended because of its erratic metering and an Auto-WB that reminds me of the old days of digital photography.

If you shoot RAW, which you should with this type of camera, all three deliver better image quality. The Fuji gains less than the other two. The Sony RX100's results are a lot better in low-light, high-ISO shots. The Ricoh gives you the chance to correct AWB and exposure and delivers truly impressive, crisp images that are ahead not only of the Sony but also of the Fuji. The 16 MP sensor without a low-pass filter, combined with a lens that is very sharp even wide open, creates images comparable to a full frame camera: crisp and sharp even when viewed at 100% in Lightroom.

Summary: all three gain from shooting RAW, but if you still insist on shooting JPEG the Fuji is clearly the best.

How do they feel in hand?

R0002421 - RICOH GR (18.3mm, f/4.5, 1/50 sec, ISO100)

Image quality wise, all three are excellent. Of course the bigger sensors and the prime lenses result in better image quality from the Fuji and Ricoh, but you don't need to worry about the images from the Sony. They are so much better than what you get from a compact digital camera, and in good light they are very close to the others.

The bigger difference is in the handling. The Fuji X100 is the biggest of the group and it handles best by far. This is no surprise: unless you have very small hands, a DSLR handles best, followed by a big mirrorless camera like the Fuji XP1, followed by something like the Fuji X100.

The Ricoh handles very, very well too. It's just a little bit longer than the RX100, but this 1 cm makes a big difference, as does the rubber grip. This camera sticks to your hand and gives very secure one-handed operation, the best of the three in this regard. It has the right number of knobs and the most useful mode selector in the history of digital cameras. I will come back to that.

The Sony is the smallest and it’s too small for me. The slippery surface might look high tech and cool but it doesn’t help when holding the camera. I bought the original case. This overpriced, fake leather full case improves holding the camera a lot. It’s still not perfect but it works.

Controls:

R0002816 - RICOH GR (18.3mm, f/4, 1/250 sec, ISO100)

Fuji is best here. An old-fashioned shutter speed dial, a beautifully made aperture ring, and exposure compensation. There is nothing else. Pure photography. Wonderful. No scene programs and no gimmicks. A great concept, and its success shows that a lot of people were waiting for such a camera.

And there is this fantastic hybrid viewfinder. When I first saw it I was amazed, and here it makes perfect sense. There is just one lens, so the optical viewfinder works perfectly. On my X100 I use the OVF on a regular basis. On my XP1 I gave up using it once I got the 14mm lens.

To sum it up: the X100 gives you a shooting experience that is hard to find. It's completely different from a DSLR, but it is truly great.

The Ricoh is much smaller. There is no aperture ring and no shutter dial or exposure compensation ring, but it still handles fantastically. The reason: Ricoh has optimized the GR concept over many, many years, and that knowledge shows.

I pre-ordered mine and was ready to buy it, but at the first hands-on the low weight put me off; the camera almost felt hollow. What I noticed immediately, though, was the interface. What a relief. Everything is in the right place, and the shooting modes! But after a first check of the Auto-ISO setting I was disappointed: no way to set the minimum shutter speed! I ranted about that in my RX100 review and in my XP1 review. It is just one bloody line in a menu, nothing more! Still, some camera makers just don't get it.

But then I realized that Ricoh provides a perfect solution: their own TAv mode. This is how it works: the camera lets you set shutter speed and aperture manually, you can still use Auto-ISO, and, best of all, exposure compensation still works. Perfect! Of course there are also A, T, M, and P modes, panorama and video modes, and three MY settings. For the first time in my life the mode dial is not just a waste of space. On my Canon 6D I only switch between A and M mode; the whole dial makes no sense to me. On the GR it is extremely useful, and it is even locked: no accidental changes!
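
Just to illustrate the idea, here is a rough sketch of the logic (my own guess, not Ricoh's actual firmware): with shutter speed and aperture locked, the camera solves for ISO, and exposure compensation simply shifts the target.

    // Sketch of TAv-style metering: shutter and aperture are fixed by the user, the camera picks the ISO.
    // meteredEv100 is the scene brightness expressed as EV at ISO 100.
    let tavIso (meteredEv100 : float) (shutterSec : float) (fNumber : float) (evComp : float) =
        let settingsEv = log (fNumber ** 2.0 / shutterSec) / log 2.0   // EV demanded by the chosen pair
        let iso = 100.0 * 2.0 ** (settingsEv - meteredEv100 + evComp)  // +1 EV of compensation doubles the ISO
        max 100.0 (min 25600.0 iso)                                    // clamp to a plausible Auto-ISO range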

Even the exposure compensation works very well. No additional button to push; just dial in plus or minus and that's it.

Compared to the other two, the RX100 is clearly less enjoyable to use for experienced users. I think the RX100 works best if you set it to one of the auto-everything modes, one of the scene modes, or P and leave it there. Don't get me wrong: the Sony is not a bad camera, far from it. It's just that when you leave the auto modes behind, the camera is kind of awkward to use. It's customizable, but I never managed to make it work for me. Regarding the user interface, it feels like a compact camera.

On the other hand, it is the first camera of the three that I would recommend to people who just want to take good pictures without dealing with technical details. It has face detection AF, two auto-everything modes, and a lot of scene modes. It has its user group; I just found out that I am not part of it. For more details there is a very long review of the RX100 on my blog. If you are considering the RX100 because of the zoom, be warned: the lens is very slow when zoomed in, which means that ISO will go up like crazy. And if you consider IS (image stabilization) to be an advantage, be warned again: it's not very efficient on the RX100. I can shoot the X100 at lower shutter speeds than the RX100.

AF performance:

R0000574 - RICOH GR (18.3mm, f/4, 1/1000 sec, ISO100)

Today the most important thing seems to be fast AF. I'm old enough to know from personal experience that there was a time before autofocus, and the pictures were sharp too, maybe even more so than today. If fast AF without compromise is most important to you, I have really bad news: you need to get a Canon 1DX or Nikon D4. Expensive, big, and heavy monsters, but truly outstanding performers. Everything else is a compromise.

If you don't want to spend that amount of money or don't want to lug around pro DSLRs, there is still hope. Even an entry-level DSLR like a Canon 700D focuses faster than any mirrorless or big-sensor compact camera. If you are looking for a camera to take pictures of your kids running around, get a DSLR!

Here we are talking about AF for static subjects only (I chose the boat picture to make that clear enough). Now who is best of the three? In good light there is not much difference; they all focus fast enough thanks to their fast lenses. In low light they act completely differently. The GR gets extremely slow and sometimes refuses to focus at all. It reminds me of the Fuji before the firmware changes, maybe it is even worse. Using the GR in really poor light is a pain; the only option is to set it to snap focus and guess the distance.

The Sony is not a low-light hero either. In really low light it acts similarly to the Ricoh, maybe a little bit better, but without the option to switch to snap focus it is even less useful when light gets really low.

The Fuji X100 was a very poor performer too, but a couple of firmware updates have transformed the camera. Now it is very, very good and easily the best of the three in poor light. Fuji has done a great job here. I'm sure the X100s is even better, but I would try that first. Even the X100s is not an action camera.

Summary:

R0004145 - RICOH GR (18.3mm, f/4.5, 1/250 sec, ISO200)

All three cameras have two things in common: rather big sensors in small bodies, and integrated lenses. Apart from that they are completely different. I hope I was able to give you an idea of how it feels to shoot with these cameras.

If I sounded too negative about the RX100, don't let that distract you. It's a fantastic camera; it just doesn't work for me. But that doesn't mean it will not work for you. For people who don't want to deal with the technical part of photography, this camera is the best of the three. It delivers great results in auto modes and has one of the most reliable exposure metering systems. Which one to get, the "old" RX100 or the RX100 II? There is a huge price difference, but the tilt screen is hard to ignore. Value for money on the new one is, how to put it gently, not the best, but this is true for the other two as well. They are perfect examples of economies of scale.

Maybe I was too positive about the GR because I enjoy its interface and the shooting experience so much, but it is hard to ignore that the Ricoh has some serious drawbacks: exposure meter and WB issues, and lifeless JPEG output. Pictures need post-processing! But to have an APS-C sensor camera in my pocket is something truly special. It is the most discreet and unassuming camera there is. Everybody will think you are using a cheap compact camera as long as you resist the temptation to put the big, ugly lens shade or fancy straps on it. I use lens shades all the time, but the GR needs its lens shade as much as a fish needs a bicycle. Because of its unassuming looks and the snap focus function, this is a street photographer's dream camera.

The Fuji X100 is the camera I have used the most. I loved it from the first time I picked it up because of its shooting experience and output. It has been vastly improved during its lifetime with various firmware updates. It's the most complete of the cameras here, but it is also not perfect. Compared to the others, the X100 is not pocketable. I can put it in the pocket of my winter jacket when traveling, but it is big and heavy compared to the other two. I love the "35mm" lens because it is more versatile and can be used for portraits too if you don't get too close. I use it a lot less since I got the XP1 and the great 14mm lens, but I will never sell it. A camera to keep. X100 or X100s? I think I would still take the X100 because of its white balance and colors.

I know that in some places it is not possible to try them side by side in a shop, but I strongly recommend getting them in your hands before you decide. They are completely different, and there is a strong chance of getting it wrong if you base your decision on reviews and second-hand experiences. Regarding IQ you can't go wrong as long as you shoot RAW, but regarding handling you need to decide which one fits you best.


Who will upgrade the telecom foundation of the Internet?


Although readers of this blog know quite well the role that the Internet can play in our lives, we may forget that its most promising contributions — telemedicine, the smart electrical grid, distance education, etc. — depend on a rock-solid and speedy telecommunications network, and therefore that relatively few people can actually take advantage of the shining future the Internet offers.

Worries over sputtering advances in bandwidth in the US, as well as an actual drop in reliability, spurred the FCC to create the Technology Transitions Policy Task Force, and to drive discussion of what they like to call the “IP transition”.

Last week, I attended a conference on the IP transition in Boston, one of a series being held around the country. While we tussled with the problems of reliability and competition, one urgent question loomed over the conference: who will actually make advances happen?

What’s at stake and why bids are coming in so low

It’s not hard to tally up the promise of fast, reliable Internet connections. Popular futures include:

  • Delivering TV and movie content on demand
  • Checking on your lights, refrigerator, thermostat, etc., and adjusting them remotely
  • Hooking up rural patients with health care experts in major health centers for diagnosis and consultation
  • Urgent information updates during a disaster, to aid both victims and responders

I could go on and on, but already one can see the outline of the problem: how do we get there? Who is going to actually create a telecom structure that enables everyone (not just a few privileged affluent residents of big cities) to do these things?

Costs are high, but the payoff is worthwhile. Ultimately, the applications I listed will lower the costs of the services they replace or improve life enough to justify an investment many times over. Rural areas — where investment is currently hardest to get — could probably benefit the most from the services because the Internet would give them access to resources that more centrally located people can walk or drive to.

The problem is that none of the likely players can seize the initiative. Let’s look at each one:

Telecom and cable companies
The upgrading of facilities is mostly in their hands right now, but they can't see beyond the first item in the previous list. Distributing TV and movies is a familiar business, but they don't know how to extract value from any of the other applications. In fact, most of the benefits of the other services go to people at the endpoints, not to the owners of the network. This has been a sore point with the telecom companies ever since the Internet took off, and it spurs their constant attempts to hold Internet users hostage and shake them down for more cash.

Given the limitations of the telecom and cable business models, it’s no surprise they’ve rolled out fiber in the areas they want and are actually de-investing in many other geographic areas. Hurricane Sandy brought this to public consciousness, but the problem has actually been mounting in rural areas for some time.

Angela Kronenberg of COMPTEL, an industry association of competitive communications companies, pointed out that it’s hard to make a business case for broadband in many parts of the United States. We have a funny demographic: we’re not as densely populated as the Netherlands or South Korea (both famous for blazingly fast Internet service), nor as concentrated as Canada and Australia, where it’s feasible to spend a lot of money getting service to the few remote users outside major population centers. There’s no easy way to reach everybody in the US.

Governments
Although governments subsidize network construction in many ways — half a dozen subsidies were reeled off by keynote speaker Cameron Kerry, former Acting Secretary of the Department of Commerce — such stimuli can only nudge the upgrade process along, not control it completely. Government funding has certainly enabled plenty of big projects (Internet access is often compared to the highway system, for instance), but it tends to go toward familiar technologies that the government finds safe, and therefore misses opportunities for radical disruption. It’s no coincidence that these safe, familiar technologies are provided by established companies with lobbyists all over DC.

As an example of how help can come from unusual sources, Sharon Gillett mentioned on her panel the use of unlicensed spectrum by small, rural ISPs to deliver Internet to areas that otherwise had only dial-up access. The FCC ruling that opened up “white space” spectrum in the TV band to such use has greatly empowered these mavericks.

Individual consumers
Although we are the ultimate beneficiaries of new technology (and will ultimately pay for it somehow, through fees or taxes) hardly anyone can plunk down the cash for it in advance: the vision is too murky and the reward too far down the road. John Burke, Commissioner of the Vermont Public Service Board, flatly said that consumers choose the phone service almost entirely on the basis of price and don’t really find out its reliability and features until later.

Basically, consumers can’t bet that all the pieces of the IP transition will fall in place during their lifetimes, and rolling out services one consumer at a time is incredibly inefficient.

Internet companies
Google Fiber came up once or twice at the conference, but their initiatives are just a proof of concept. Even if Google became the lynchpin it wants to be in our lives, it would not have enough funds to wire the world.

What’s the way forward, then? I find it in community efforts, which I’ll explore at the end of this article.

Practiced dance steps

Few of the insights in this article came up directly in the Boston conference. The panelists were old hands who had crossed each other’s paths repeatedly, gliding between companies, regulatory agencies, and academia for decades. At the conference, they pulled their punches and hid their agendas under platitudes. The few controversies I saw on stage seemed to be launched for entertainment purposes, distracting from the real issues.

From what I could see, the audience of about 75 people came almost entirely from the telecom industry. I saw just one representative of what you might call the new Internet industries (Microsoft strategist Sharon Gillett, who went to that company after an august regulatory career) and two people who represent the public interest outside of regulatory agencies (speaker Harold Feld of Public Knowledge and Fred Goldstein of Interisle Consulting Group).

Can I get through to you?

Everyone knows that Internet technologies, such as voice over IP, are less reliable than plain old telephone service, but few realize how soon reliability of any sort will be a thing of the past. When a telecom company signs you up for a fancy new fiber connection, you are no longer connected to a power source at the telephone company’s central office. Instead, you get a battery that can last eight hours in case of a power failure. A local power failure may let you stay in contact with outsiders if the nearby mobile phone towers stay up, but a larger failure will take out everything.

These issues have a big impact on public safety, a concern raised at the beginning of the conference by Gregory Bialecki in his role as a Massachusetts official, and repeated by many others during the day.

There are ways around the new unreliability through redundant networks, as Feld pointed out during his panel. But the public and regulators must take a stand for reliability, as the post-Sandy victims have done. The issue in that case was whether a community could be served by wireless connections. At this point, they just don’t deliver either the reliability or the bandwidth that modern consumers need.

Mark Reilly of Comcast claimed at the conference that 94% of American consumers now have access to at least one broadband provider. I’m suspicious of this statistic because the telecom and cable companies have a very weak definition of “broadband” and may be including mobile phones in the count. Meanwhile, we face the possibility of a whole new digital divide consisting of people relegated to wireless service, on top of the old digital divide involving dial-up access.

We’ll take that market if you’re not interested

In a healthy market, at least three companies would be racing to roll out new services at affordable prices, but every new product or service must provide a migration path from the old ones it hopes to replace. Nowhere is this more true than in networks because their whole purpose is to let you reach other people. Competition in telecom has been a battle cry since the first work on the law that became the 1996 Telecom Act (and which many speakers at the conference say needs an upgrade).

Most of the 20th century accustomed people to thinking of telecom as a boring, predictable utility business, the kind that “little old ladies” bought stock in. The Telecom Act was supposed to knock the Bell companies out of that model and turn them into fierce innovators with a bunch of other competitors. Some people actually want to reverse the process and essentially nationalize the telecom infrastructure, but that would put innovation at risk.

The Telecom Act, especially as interpreted later by the FCC, fumbled the chance to enforce competition. According to Goldstein, the FCC decided that a duopoly (baby Bells and cable companies) was enough competition.

The nail in the coffin may have been the FCC ruling that any new fiber providing IP service was exempt from the requirements for interconnection. The sleight of hand that the FCC used to make this switch was a redefinition of the Internet: they conflated the use of IP on the carrier layer with the bits traveling around above, which most people think of as “the Internet.” But the industry and the FCC had a bevy of arguments (including the looser regulation of cable companies, now full-fledged competitors of the incumbent telecom companies), so the ruling stands. The issue then got mixed in with a number of other controversies involving competition and control on the Internet, often muddled together under the term “network neutrality.”

Ironically, one of the selling points that helps maintain a competitive company, such as Granite Telecom, is reselling existing copper. Many small businesses find that the advantages of fiber are outweighed by the costs, which may include expensive quality-of-service upgrades (such as MPLS), new handsets to handle VoIP, and rewiring the whole office. Thus, Senior Vice President Sam Kline announced at the conference that Granite Telecom is adding a thousand new copper POTS lines every day.

This reinforces the point I made earlier about depending on consumers to drive change. The calculus that leads small businesses to stick with copper may be dangerous in the long run. Besides lost opportunities, it means sticking with a technology that is aging and decaying by the year. Most of the staff (known familiarly as Bellheads) who designed, built, and maintain the old POTS network are retiring, and the phone companies don’t want to bear the increasing costs of maintenance, so reliability is likely to decline. Kline said he would like to find a way to make fiber more attractive, but the benefits are still vaporware.

At this point, the major companies and the smaller competing ones are both cherry picking in different ways. The big guys are upgrading very selectively and even giving up on some areas, whereas the small companies look for niches, as Granite Telecom has. If universal service is to become a reality, a whole different actor must step up to the podium.

A beautiful day in the neighborhood

One hope for change is through municipal and regional government bodies, linked to local citizen groups who know where the need for service is. Freenets, which go back to 1984, drew on local volunteers to provide free Internet access to everyone with a dial-up line, and mesh networks have powered similar efforts in Catalonia and elsewhere. In the 1990s, a number of towns in the US started creating their own networks, usually because they had been left off the list of areas that telecom companies wanted to upgrade.

Despite legal initiatives by the telecom companies to squelch municipal networks, they are gradually catching on. The logistics involve quite a bit of compromise (often, a commercial vendor builds and runs the network, contracting with the city to do so), but many town managers swear that advantages in public safety and staff communications make the investment worthwhile.

The limited regulatory power that cities have over cable companies (a control that is sometimes taken away) is a crude instrument, like a potter trying to manipulate clay with tongs. To craft a beautiful work, you need to get your hands right on the material. Ideally, citizens would design their own future. The creation of networks should involve companies and local governments, but also the direct input of citizens.

National governments and international bodies still have roles to play. Burke pointed out that public safety issues, such as 911 service, can’t be fixed by the market, and developing nations have very little fiber infrastructure. So, we need large-scale projects to achieve universal access.

Several speakers also lauded state regulators as the most effective centers to handle customer complaints, but I think the IP transition will be increasingly a group effort at the local level.

Back to school

Education emerged at the conference as one of the key responsibilities that companies and governments share. The transition to digital TV was accompanied by a massive education budget, but in my home town, there are still people confused by it. And it’s a minuscule issue compared to the task of going to fiber, wireless, and IP services.

I had my own chance to join the educational effort on the evening following the conference. Friends from Western Massachusetts phoned me because they were holding a service for an elderly man who had died. They lacked the traditional 10 Jews (the minyan) required by Jewish law to say the prayer for the deceased, and asked me to Skype in. I told them that remote participation would not satisfy the law, but they seemed to feel better if I did it. So I said, “If Skype will satisfy you, why can’t I just participate by phone? It’s the same network.” See, FCC? I’m doing my part.


U.S. Mobile Internet Traffic Nearly Doubled This Year

An anonymous reader sends this news from the NY Times Bits Blog: "Two big shifts happened in the American cellphone industry over the past year: Cellular networks got faster, and smartphone screens got bigger. In the United States, consumers used an average of 1.2 gigabytes a month over cellular networks this year, up from 690 megabytes a month in 2012, according to Chetan Sharma, a consultant for wireless carriers, who published a new report on industry trends on Monday. Worldwide, the average consumption was 240 megabytes a month this year, up from 140 megabytes last year, he said."
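
As a quick sanity check on the headline, using the figures quoted above (my arithmetic, not from the report):

    // "Nearly doubled": 1.2 GB vs. 690 MB a month in the US, 240 MB vs. 140 MB worldwide
    let usGrowth = 1200.0 / 690.0     // about 1.74x year over year
    let worldGrowth = 240.0 / 140.0   // about 1.71x year over year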

Read more of this story at Slashdot.

The Adventures of Flatman

a_romance_in_n_dimensions
3 public comments
jhamill
4031 days ago
Crap. I think like that a lot at work but, I do work with idiots so...
California
emdeesee
4041 days ago
Moral: whenever you talk to anyone, you might actually be talking to the three dimensional cross-section of a hyper-intelligent pan-dimensional being.
Sherman, TX
madth3
4041 days ago
Well, you know...

Scripting News: Google, Twitter and Facebook, et al have a way out.


It's great that there's a discussion online today about whether or not the tech companies had a way to resist the US govt, if they believed that it was wrong to share information about their users without users knowing.

There is a way around it. They could reverse the process of centralizing user information on their servers.

When Google, Twitter and Facebook found the web, it was a completely decentralized network from a content standpoint.

  • (It's never been decentralized at a transport level. There are several main peering points, and the name system is a hierarchy.)

Google and Facebook could have, together, easily defined new standards for distributing information in ways that would make it harder for the government to tap in. At least they could have avoided being responsible for it themselves.

Or they could have been supportive of standards that decentralize, like one that's dear to me -- RSS. Instead they undermined it. In Google's case, in a fairly horrific way. Did they ever say they'd never come back to RSS if we manage to reboot it after cleaning up their mess? A mess that they offered absolutely no help with.

Twitter had the biggest opportunity to create a free-flowing federated network of free users. They could have given us a new layer the way the web did in 1992. Instead, they sucked in all the energy created by developers and did the same thing the others did -- centralized. Goodbye freedom. Hello NSA.

They brought this on, they're the cause of the mess we're in now.

I have no sympathy for them. They could still get out of the hotseat. There would be nothing illegal about them telling the world that they made a huge mistake by centralizing everything, and now they're going to reverse the process. They don't have to say what the consequences of that mistake are, we all know, thanks to Glenn Greenwald.

What could the government do? They'd be alone.

Of course, no one in their right mind believes they would do it.


Need for Exercises


For many years, I have learned various subjects (mostly programming related, like languages and frameworks) purely by reading a book, blog posts or tutorials on the subjects, and maybe doing a few samples.

In recent years, I "learned" new programming languages by reading books on the subject. And I have noticed an interesting phenomenon: when I have a choice between using these languages on a day-to-day basis or using another language I am already comfortable with, I go for the language I am comfortable with. This, despite my inner desire to use the hot new thing, or to try out new ways of solving problems.

I believe the reason this is happening is that most of the texts I have read that introduce these languages are written by hackers and not by teachers.

What I mean by this is that these books are great at describing and exposing every feature of the language and have some clever examples shown to you, but none of these actually force you to write code in the language.

Compare this to Scheme and the book "Structure and Interpretation of Computer Programs". That book is designed with teaching in mind, so at the end of every section where a new concept has been introduced, the authors have a series of exercises specifically tailored to put the knowledge you just gained to use. Anyone who reads that book and does the exercises is going to come out a solid Scheme programmer, and will know more about computing than they would from reading any other book.

In contrast, the experience of reading a modern computing book from most of the high-tech publishers is very different. Most of the books being published do not have an educator reviewing the material; at best they have an editor who will fix your English, reorder some material, and make sure the proper text is italicized and your samples are monospaced.

When you finish a chapter in a modern computing book, there are no exercises to try. Your choices are either to take a break by checking some blogs or to keep marching on a quest to collect more facts in the next chapter.

During this process, while you amass a bunch of information, at some neurological level, you have not really mastered the subject, nor gained the skills that you wanted. You have merely collected a bunch of trivia which most likely you will only put to use in an internet discussion forum.

What books involving an educator will do is include exercises that have been tailored to use the concepts that you just learned. When you come to this break, instead of drifting to the internet you can sit down and try to put your new knowledge to use.

Well-developed exercises are an application of the psychology of Flow: they ensure that the exercise matches the skills you have developed, and they guide you along a path that keeps you in an emotional state that includes control, arousal, and joy (flow).

Anecdote Time

Back in 1988 when I first got the first edition of the "C++ Language", there were a couple of very simple exercises in the first chapter that took me a long time to get right and they both proved very educational.

The first exercise was "Compile Hello World". You might think that is an easy one and be tempted to skip it. But I had decided that I was going to do each and every one of the exercises in the book, no matter how simple. So if the exercise said "Build Hello World", I would build Hello World, even though I was already a seasoned assembly language programmer.

It turned out that getting "Hello World" to build and run was very educational. I was using the Zortech C++ compiler on DOS back then, and getting a build turned out to be almost impossible. I could not get the application to build; I got some obscure error and had no way to fix it.

It took me days to figure out that I had the Microsoft linker in my path before the Zortech Linker, which caused the build to fail with the obscure error. An important lesson right there.

On Error Messages

The second exercise that I struggled with was a simple class. The simple class was missing a semicolon at the end. But unlike modern compilers, the Zortech C++ compiler's error message at the time was less than useful. It took a long time to spot the missing semicolon, because I was not paying close enough attention.

Doing these exercises trains your mind to recognize that "useless error message gobble gobble" actually means "you are missing a semicolon at the end of your class".

More recently, I learned the same hard way that the F# error message "The value or constructor 'foo' is not defined" really means "You forgot to use 'rec' in your let", as in:

let foo x = if x = 1 then 1 else foo (x - 1)   // the 'foo' on the right-hand side is what triggers the error
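
The fix, of course, is just one keyword:

let rec foo x = if x = 1 then 1 else foo (x - 1)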

That is a subject for another post, but the F# error message should tell me what I did wrong at a language level, as opposed to explaining to me why the compiler is unable to figure things out in its internal processing of the matter.

Plea to book authors

Nowadays we are cranking out books left and right to explain new technologies, but rarely do these books get input from teachers and professional pedagogues. So we end up accumulating a lot of information, we sound lucid at cocktail parties, and we might even engage in a pointless engineering debate over features we barely master. But we have not learned.

Coming up with the ideas to try out what you have just learned is difficult. As you think of things that you could do, you quickly find that you are missing knowledge (discussed in further chapters) or your ideas are not that interesting. In my case, my mind drifts into solving other problems, and I go back to what I know best.

Please, build exercises into your books. Work with teachers to find the exercises that match the material just exposed and help us get in the zone of Flow.
