
Les Houches Protein Dynamics Workshop

October 9, 2019

 

Les Houches-TSRC Protein Dynamics Workshop, June 7-12, 2020

I am one of the organizers of the Les Houches-TSRC Protein Dynamics Workshop. We cordially invite applications to participate in the fourth Les Houches-TSRC Protein Dynamics Workshop, which will be held from June 7 to June 12, 2020, in Les Houches, close to Chamonix in the French Alps. The workshop is a forum for presenting, teaching, and discussing results from the application of state-of-the-art experimental methods (including, but not limited to, optical spectroscopy, NMR spectroscopy, X-ray crystallography, XFELs, cryo-electron microscopy, atomic-force microscopy, and scattering methods) and of theoretical and computational approaches to the study of protein dynamics.

About 30 invited speakers will give oral presentations, each comprising a pedagogic introduction to the methodology employed followed by applications from their own work. Each 30-minute presentation will be followed by 15 minutes of discussion. In addition to the invited speakers, there will be space for 30 student and postdoctoral participants, who can present posters as well as short talks on their work.

 

The meeting will take place at the Ecole de Physique des Houches. The site, in operation at its present location in the shadow of the French Alps since 1951, has a long tradition of hosting relatively small (< 100 attendees), focused workshops and schools in a secluded setting that stimulates intense discussion during formal presentations and promotes fruitful, informal interactions among all participants, including senior investigators, young investigators, postdoctoral researchers, and graduate students. The Ecole de Physique des Houches can be reached conveniently from Geneva Airport by shuttle bus in about 80 minutes.

We have finalized the list of speakers, which you can find on our website, www.tinyurl.com/protdyn2020. You may also find details about the venue and travel there and on the site of the Les Houches Physics School: https://www.houches-school-physics.com/practical-information/access/

I am contacting you today to ask if you could help announce the meeting by forwarding the text below or the attached poster, printing and displaying the poster at your institute, or talking to colleagues and students about our workshop.

For further details, and for information about applications, please visit www.tinyurl.com/protdyn2020.

 

Thank you.

Paul

on behalf of the organizers

Enrica Bordignon (Ruhr Uni Bochum)
Matthias Heyden (Arizona State University)
Paul Schanda (Institut de Biologie Structurale Grenoble)
Ben Schuler (Universität Zürich)
Martin Weik (Institut de Biologie Structurale Grenoble)

 

 

 

February 18, 2014 · Jonathan Damery, ECE ILLINOIS

Most mammals have binocular vision. It helps squirrels in the trees on campus determine the distance between branches and make the leap. It helps basketball players toss buzzer-beaters from half court.

For computers, the same stereoscopic vision—taken with two or more offset cameras—can provide equally valuable 3-D information. Even static images can help with object recognition or, in the case of Google’s aerial maps, with creating 3-D cityscapes, complete with topography and correctly proportioned and shaded trees.
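(The geometry behind this is simple: in a standard two-camera rig, a point's depth Z follows from triangulation as Z = f·B/d, where f is the focal length, B is the baseline between the cameras, and d is the disparity, the horizontal shift of the point between the two views. Recovering depth thus reduces to finding each pixel's disparity, which is the stereo-matching problem at the heart of this work.)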

Now, the rate at which computers can extract that 3-D information is speeding up. ECE graduate student Jungwook Choi and Computer Science Department Head Rob A. Rutenbar—also an Abel Bliss Professor—have demonstrated one of the fastest video-rate implementations of this 3-D computer vision.

Jungwook Choi presenting at MEMOCODE last fall. Photo courtesy of MEMOCODE.
Last fall, their design earned them top honors for the best accuracy-adjusted performance at the MEMOCODE design competition, held in Portland by IEEE and the Association for Computing Machinery (ACM). 

With video-rate stereo matching, computers could recognize gestures more readily, and the technology could play an important role in the move toward driverless vehicles. Automakers like Mercedes-Benz and Volvo have already added pedestrian detection to some models, where stereo images, coupled with radar, are used to warn the driver of approaching pedestrians and, if necessary, apply the brakes.

“In such a case, speed of stereo matching is critical,” Choi said. “The faster stereo matching is done, the more chance the car can avoid the collision.”

In general, though, Choi indicated that video-rate stereo matching, while highly important, is just one piece of a larger puzzle. The whole picture, the focus of his overall research, is developing customizable hardware that allows computers to interpret observations more quickly.

To do this, Choi and Rutenbar utilized a type of algorithm known as belief propagation, which, in the case of stereo matching, establishes probable guesses about the spatial depth of pixels in an image. Belief propagation is also widely used in artificial intelligence. Speech recognition, for example, often uses some form of belief propagation when choosing between homophones, interpreting accents, and so forth.
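The article does not include code, but the core idea is easy to sketch. Below is a minimal, illustrative belief-propagation stereo matcher in Python with NumPy; the absolute-difference data cost, the Potts smoothness penalty, the synchronous update schedule, and parameters such as n_disp, n_iters, and lam are assumptions chosen for clarity, not details of Choi and Rutenbar's design.

```python
# Minimal loopy belief propagation for stereo matching: an illustrative sketch,
# not Choi and Rutenbar's implementation. All parameter choices here
# (disparity range, Potts penalty, iteration count) are assumptions.
import numpy as np

def data_cost(left, right, n_disp):
    """Cost of assigning each candidate disparity to each pixel (absolute difference)."""
    h, w = left.shape
    cost = np.empty((h, w, n_disp))
    for d in range(n_disp):
        # Disparity d means pixel (y, x) in the left image matches (y, x - d) in the right.
        cost[:, :, d] = np.abs(left - np.roll(right, d, axis=1))
    return cost

def bp_stereo(left, right, n_disp=16, n_iters=5, lam=10.0):
    h, w = left.shape
    D = data_cost(left.astype(float), right.astype(float), n_disp)
    labels = np.arange(n_disp)
    V = lam * (labels[:, None] != labels[None, :])        # Potts smoothness cost
    offs = {"u": (-1, 0), "d": (1, 0), "l": (0, -1), "r": (0, 1)}
    opp = {"u": "d", "d": "u", "l": "r", "r": "l"}
    # msgs[k][y, x] is the message pixel (y, x) receives from its neighbor in direction k.
    msgs = {k: np.zeros((h, w, n_disp)) for k in offs}
    for _ in range(n_iters):
        belief = D + sum(msgs.values())
        new = {}
        for k, (dy, dx) in offs.items():
            out = belief - msgs[k]                        # exclude the target's own message
            # Min-sum update: for each target label t, min over source labels s of out[s] + V[s, t].
            m = np.min(out[:, :, :, None] + V[None, None, :, :], axis=2)
            m -= m.min(axis=2, keepdims=True)             # normalize for numerical stability
            # The neighbor in direction k receives this as coming from the opposite direction.
            # np.roll wraps at the borders; a real implementation would handle edges explicitly.
            new[opp[k]] = np.roll(m, (dy, dx), axis=(0, 1))
        msgs = new
    belief = D + sum(msgs.values())
    return np.argmin(belief, axis=2)                      # most probable disparity per pixel
```

After the final iteration, each pixel takes the disparity that minimizes its belief; these are exactly the "probable guesses about the spatial depth of pixels" described above.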

“Belief propagation methods have been researched intensively [over the past decade] and achieved huge success in practice,” Choi said. “But still, there has been a missing step between algorithmic solutions and their realization in the real world applications…mainly due to slow speed.”

Often there’s a trade-off between speed and accuracy, but Choi and Rutenbar were able to achieve both. They employed a belief propagation algorithm known as sequential tree-reweighted inference (TRW-S), which, reportedly, had never been demonstrated at video rates. These algorithms traditionally begin in one section of an image and, as the name implies, move sequentially, pixel by pixel, through the rest. It’s an inherently slow but reliable process. 

To achieve video rates, the team turned to customizable hardware.

“Jungwook devised some very clever architectural tricks to expose lots of useful parallelism,” Rutenbar said.

Rob A. Rutenbar. Photo by L. Brian Stauffer.
“We can be doing lots of work on different parts of the image concurrently.”
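The article does not spell out those tricks, but one common way to expose this kind of concurrency in a sequential sweep, offered here purely as a hypothetical illustration rather than the authors' actual scheme, is wavefront ordering: in a top-left-to-bottom-right pass, every pixel on the same anti-diagonal depends only on pixels from earlier diagonals, so an entire diagonal can be processed at once.

```python
# Hypothetical illustration of wavefront scheduling, not the authors' design.
# In a top-left-to-bottom-right sweep, pixel (y, x) depends only on (y - 1, x)
# and (y, x - 1), both on earlier anti-diagonals, so all pixels sharing the
# same y + x can be updated concurrently.
def wavefront_batches(h, w):
    """Yield batches of (y, x) coordinates; pixels within a batch are independent."""
    for s in range(h + w - 1):                  # s = y + x indexes the anti-diagonal
        yield [(y, s - y) for y in range(max(0, s - w + 1), min(h, s + 1))]

# Example: a 3x4 image yields 6 batches, the largest holding 3 independent pixels.
for batch in wavefront_batches(3, 4):
    print(batch)
```

The wider the image, the more pixels each diagonal holds, which is where hardware with many parallel processing elements can pull far ahead of a strictly sequential sweep.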

In experiments, their design achieved 12 frames per second, significantly faster than other recently demonstrated belief-propagation approaches.

The team used a Convey HC-1 computer system, which includes customizable integrated circuits known as field-programmable gate arrays. “Stereo matching requires a huge amount of computation and memory bandwidth,” Choi explained. “That’s why people have tried to implement stereo matching algorithms on multi-cores…or graphic processors for real-time execution, but they are fundamentally restricted in the way of allocating computing power and memory bandwidth.”

Instead, using the field-programmable gate arrays, Choi could fully optimize the system.

Part of the team’s success, therefore, depended on this interdisciplinary approach: the algorithms and machine-learning expertise came from the realm of computer science, while the hardware customization stemmed from electrical engineering.

Now, as these algorithms and hardware implementations continue to improve, there’s no doubt that consumers will begin to enjoy the benefits. Already, their system could be translated into real-world applications like that pedestrian detection system.

“One could take the hardware designs into a more custom silicon form, reduce cost and power, and make something with real practical relevance, in a pretty straightforward way,” Rutenbar said. The only question now is just how fast that will happen.