SIGGRAPH - Adobe execs talk up R&D, new projects

11.08.2006
The Association for Computing Machinery held its annual Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) conference in Boston earlier this month, paying special tribute to Martin Newell, an Adobe Fellow at Adobe Systems Inc. Computerworld's Dan Tennant spoke with Newell and David Story, Adobe's vice president of Digital Imaging Product Development, about their work at Adobe, how the development process unfolds, and the kinds of products they'd like to be working on. Excerpts from the interview follow.

One of you is an Adobe Fellow involved in research and development; the other is a software engineer working on end products. How do the two of you interact with one another? Newell: I run a group of eight people -- I'm recruiting at the moment -- called the Advanced Graphics Technology group [also known as the Labs], which has been around for 10 years, doing research and development that's relevant to Adobe's product groups. In the past, my guys would effectively become members of the product teams and actually see products through to shipping.

Story: We'd have a guy from Labs come and sit with us for six or nine months to debug a product, to get the UI right, to make it work on Macintosh and Windows and wherever else it needed to run, to make it run a lot faster, and to file all the bugs.

Newell: The people in the product groups are software engineers: they know all that stuff, and they're really good at it. But my guys are not -- they're not trained for it. It was a misuse of their true skill, but that's the way it was for many years. Then Dave came along, with a background in, and frustration with, the same kind of problem. He took engineers from the Photoshop team and made them responsible for bringing technology into the product groups from outside. Now, those guys work closely with us, and it's made our lives way, way easier. Though we work with a product group on a technology right up until shipping, we're working on the stuff underneath -- the engine, not the small things. It's way more efficient, and while our R&D groups are not very big compared with some other companies, we pride ourselves on being extremely directed.

What kind of evolution have you seen transform the development model since you two came together two years ago? Story: If you look at a [typical] product that's been in development for five years, the feature development time out of that five-year cycle is probably less than a third. In a traditional waterfall model, a third of the cycle goes to design, a third to development, a third to debugging. Design, develop, debug, and then start over. But we've been radically changing the development process with products like Adobe Lightroom, where we're putting it out in public -- every two months, we're doing releases. Lightroom has been in development for almost three years; we've got several hundred thousand people using it right now. One of the product groups actually lived in Labs to incubate the idea. We moved the whole team over to my group to continue to develop it, and now we're doing a development model where we actually put it out in public. We experiment with a lot of different models, and we can't afford to have [Labs] tied up for six to nine months to make an idea come through. We've had to change the way we think about the pipeline of innovation.

Is Lightroom's public development process one that future products will follow or draw from? Newell: We hope so; getting a new 1.0 out the door is an enormous hurdle. Most of Adobe's products were acquired and then further developed -- actually developing a whole new product in-house (like Lightroom) is the exception, not the rule. We've got these venture capitalists within Labs with the funds to support seed products, and we encourage people to come in from the rest of the company if they've got a good idea that they can't work on outside Labs. We evaluate their business plan, and if it looks like it'll be a good investment, we'll give them the seed funding they need, with the expectation that they'll then go back to the product groups and run with it.

You mentioned earlier that you were looking to recruit. Computerworld recently ran a series of stories detailing the growing stagnation of the hiring pool. Have you guys had trouble tracking down qualified candidates? Newell: Absolutely, and the majority of applicants that we see who are anywhere near qualified are not U.S. citizens -- they're people from overseas who are here on temporary visas. That's why it really hurts when the H-1B program reaches its quota. Last time I went through the recruiting process, I was looking for two people, only two people, and it took me a year to find them. We had dozens of [applicants] come through with a very good filtering mechanism in place. We had people in who had written papers. They'd refereed for journals and conferences. They would come in and give us great presentations. They were buzzword-compliant -- in short, these guys were really good. And then we'd pass them out to the guys on my team to interview them, saying, "Make sure they know the basics." When we'd get together afterwards and go around the table, everyone would give the thumbs-down. You get these people in who can work at a certain level, but they don't know the fundamentals. We asked ourselves, "Is it important for these people to understand linear algebra and integral calculus? Shouldn't these people know something as basic as the binary representation of -1?" Well, yes!
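The fundamental Newell alludes to is two's-complement arithmetic, the signed-integer encoding used by essentially all modern processors, in which -1 is stored as a bit pattern of all ones. The short C sketch below is purely illustrative -- it is not from the interview -- and simply prints that pattern for an 8-bit value.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int8_t x = -1;              /* stored in two's complement */
        uint8_t bits = (uint8_t)x;  /* the same byte read as unsigned is 255 */

        /* Print the bits from most significant to least significant. */
        for (int i = 7; i >= 0; i--)
            putchar((bits >> i) & 1 ? '1' : '0');
        putchar('\n');              /* prints 11111111 */
        return 0;
    }

Compiled and run, it prints 11111111, the eight-bit two's-complement encoding of -1.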

Hypothetical: You're given the opportunity to develop a single piece of technology to incorporate into Adobe's product line in one day -- no work, no pain, no cost. What would you develop? Newell: I would like a product out there that is able to, from a single image of an everyday scene, completely understand and recognize everything in that scene; a product that could tell me everything about all the objects, including the people and who they are, so that tagging faces in photographs becomes unnecessary. All the things that a 10-year-old would be able to tell me, I want the computer to tell me: "That's a man running. That's a woman smiling." Based on that product, we could build an image-retrieval system.

We're moving in that direction already, where the kind of processing that we want to do is based on the content of the image -- what's behind the image. Some of the work we have done is in face-finding; these are baby steps, the first toward realizing this dream.

And if I were to ask you what your current research project is, you would say? Newell: Baby steps.