
Coders must reprogram how they write for Wall Street

Parallel programming knowledge is becoming a must-have skill

September 24, 2010 06:03 AM ET

Computerworld - As high-performance computing (HPC) becomes more important in helping financial services companies deal with a rising tsunami of data, there's growing angst on Wall Street about a dearth of skilled programmers who can write for multicore chip architectures and parallel computing systems.

"In high-performance computing, there is a major sea change that's been happening... and it's getting more dramatic," said Jeffrey Birnbaum, chief technology architect at Bank of America/Merrill Lynch. "With the sea change that's coming -- parallel computing, multicore processers -- the skill of the programmer matters more."

Given that the financial services industry is often an early adopter of technology that eventually trickles down into other markets, the skills Wall Street coders need now are likely to be the same ones that other coders will need in the future.

Birnbaum talked about why programmers should hone their skills during a presentation at the High Performance Computing in Financial Markets Conference in New York this week.

Birnbaum stressed that programmers need to do more than simply take up languages, such as Assembly, that can take advantage of parallel computing; they also need to become more skillful with more traditional programming languages.


"All things being equal, sure, there is a difference in performance [between languages]," Birnbaum said. "So your best guy who programs in Assembly will be marginally better than your greatest C or C++ guy. And [he] will easily beat the best Java guy. But that's not the point. Bad programmers create bad code. It doesn't matter what language they use."

The rise of distributed computing

About five years ago, Moore's Law hit a dead end for single CPUs, which could no longer keep up with application performance requirements, according to Charles King, principal analyst at research firm Pund-IT. That led to the emergence of multicore processors and parallel, or distributed, computing -- the ability to spread a complex programming task across many CPUs.

Yet most programmers haven't embraced parallel programming, with as many as 98% still relying on serial coding methods, King said. The main issue: programming for parallel architectures is complicated.
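To see where the extra difficulty comes from, consider the gap between a serial loop and its parallel counterpart. The sketch below is an illustrative Java example, not drawn from the article: it computes the same sum both ways, and the parallel version only works because the combining step is associative and avoids shared mutable state -- constraints that serial code never has to think about.

    import java.util.stream.LongStream;

    public class SerialVsParallel {
        public static void main(String[] args) {
            long n = 1_000_000L;

            // Serial version: a single core walks the whole range in order.
            long serialSum = 0;
            for (long i = 1; i <= n; i++) {
                serialSum += i * i;
            }

            // Parallel version: the runtime splits the range across cores and
            // recombines the partial sums. The combining step must be associative
            // and free of shared mutable state -- the extra constraints that make
            // parallel code harder to write correctly than its serial equivalent.
            long parallelSum = LongStream.rangeClosed(1, n)
                    .parallel()
                    .map(i -> i * i)
                    .sum();

            System.out.println("serial   = " + serialSum);
            System.out.println("parallel = " + parallelSum);
        }
    }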

In the financial services industry, a parallel computing architecture often relies on hundreds or even thousands of x86 servers all working on a single data set that has been divided up to spread the workload. As the work is completed, the data set must be put back together in an automated fashion.
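At a small scale, that divide-and-recombine pattern looks roughly like the following sketch: a single-machine Java illustration in which worker threads stand in for the hundreds of servers, and the data and names are hypothetical. The data set is split into slices, each slice is processed independently, and the partial results are merged back into one answer.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ScatterGather {
        public static void main(String[] args) throws Exception {
            // Stand-in for a large market-data set; a real system would shard
            // this across hundreds or thousands of machines, not local threads.
            double[] prices = new double[10_000_000];
            for (int i = 0; i < prices.length; i++) {
                prices[i] = Math.random() * 100;
            }

            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            int chunk = (prices.length + workers - 1) / workers;

            // Scatter: each worker gets one slice of the data set.
            List<Future<Double>> partials = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int start = w * chunk;
                final int end = Math.min(start + chunk, prices.length);
                partials.add(pool.submit(() -> {
                    double sum = 0;
                    for (int i = start; i < end; i++) {
                        sum += prices[i];
                    }
                    return sum;
                }));
            }

            // Gather: recombine the partial results into one answer.
            double total = 0;
            for (Future<Double> f : partials) {
                total += f.get();
            }
            pool.shutdown();

            System.out.println("average price = " + total / prices.length);
        }
    }

In production the slices would live on different machines and the gather step would run over the network, but the shape of the code -- partition, compute, recombine -- is the same.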


