
At the recent HullPIC conference, users and developers of hull performance monitoring software discussed how to harness future technology to do a more effective job. By Volker Bertram and Thomas Wägener

Volker Bertram (DNV GL), co-organizer of the event, rates the conference as a success. The number of participants rose by 35% compared to the premiere event last year in Italy, he says. At this year’s HullPIC in Ulrichshusen, Germany, about 110 maritime experts from all over the world discussed how a ship’s hull performance can be further improved. »The high number of participants shows that the topics we talked about are interesting for the maritime industry,« Bertram reasons.

Many of the participants, coming from about 20 different countries, were from Scandinavia, but there were also attendees from the Netherlands, Singapore and other parts of Asia, and the USA. For nearly all of them, the networking character of the event was at least as important as the topics that were discussed. For Søren Hattel (Force Technology) it was interesting to see »that other companies have the same issues as us.« For Bjarte Lund (Kyma) it was also good to see »what the competitors are doing«. Furthermore, he points out the great diversity of topics discussed at HullPIC.

The conference ended with a forum discussion that had – somewhat cheekily – been given the title »Operators and Developers – Galaxies apart?« To make it short, the answer to this question appears to be »no«. On most points, the operator representatives (Rory Kennedy from cruise ship operator Royal Caribbean and Mike Servos from Tsakos Columbia Shipmanagement, operating oil tankers, bulkers and containerships) agreed with the developer representatives (Manolis Levantis of Jotun, using a simulation-based approach, and Matti Antola of Eniram, using a machine-learning approach). Moderator Michael vom Baur (HANSA) tried to sound out differences with pointed remarks, but the panelists just wouldn’t bite – they remained true to the atmosphere of HullPIC, where sober assessment of factual difficulties was the rule, leaving little room for catchphrases. Still, a clearer picture evolved, and the forum, too, was unanimously seen as a success.

The forum first discussed the key question that was probably on the minds of most HullPIC participants: Where are we with ISO 19030? More specifically, vom Baur prodded the operators: In view of all the uncertainties presented at the conference (and there were many), how useful is performance monitoring? Mike Servos put it succinctly: »Five years ago we had nothing, now we have at least something.« Rory Kennedy agreed, taking a positive angle on the state of the standard and of performance monitoring in the industry. RCCL started performance monitoring some years ago and has already made impressive progress with double-digit savings. It is a process that will continue, both in development and in implementation. So – surprise – the users think more highly of it than the developers and the maritime world in general would have expected.

Is machine learning the way forward, or is it simulation? Perhaps it was gentlemanly behavior, perhaps simply a realization of the intricacies of performance monitoring: The developers of the more simulation-based approach recalled the importance of calibrating models against in-service experience, and the developers of the more machine-learning-based approach mentioned the usefulness of virtual sensors and good hydrodynamic models. As at HullPIC 2016, the maritime industry was once more reminded that all systems on the market are »grey«, combining hydrodynamic modelling with some system identification.

The discussion then turned to a theme that had run like a common thread through the conference: sensors, human input, and how to reduce errors in the input data for performance monitoring. »With all the discussions about speed logs, torque meters, ambient conditions, etc. – should we focus on getting black boxes or should we focus on the ›human factor‹?« And, touching on autonomous technology, should machines do the job (of collecting and monitoring) or humans?

On this point, all panelists and quite a few members of the audience commented, but the positions were again surprisingly close. The consensus was that it was not a question of »either – or«: both played a vital role. All operators agreed that transparent and timely feedback to the crew improved motivation and data quality. But data frequency and good algorithms are important, too. Daniel Schmode (DNV GL) expanded on the theme of his paper on reducing errors in performance monitoring: The crew is key to getting bias (= systematic errors) down; data frequency is key to getting noise (= random errors and scatter) down. In the end, you need both for good performance monitoring.
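Schmode’s distinction can be illustrated with a minimal simulation sketch (all figures hypothetical, not from any paper presented): averaging more readings shrinks the random noise, but a systematic sensor offset – a miscalibrated speed log, say – survives any amount of averaging. Only calibration and attentive crew checks remove it.

```python
import random

random.seed(42)

TRUE_SPEED = 14.0   # hypothetical true speed through water [kn]
BIAS = 0.5          # systematic sensor offset [kn] - averaging never removes this
NOISE = 1.0         # random scatter per reading [kn] - averaging does remove this

def readings(n):
    """Simulate n noisy, biased speed-log readings."""
    return [TRUE_SPEED + BIAS + random.gauss(0, NOISE) for _ in range(n)]

for n in (10, 1000, 100000):
    avg = sum(readings(n)) / n
    # As n grows, the mean error converges to BIAS, not to zero.
    print(f"n={n:6d}  mean error vs. true speed: {avg - TRUE_SPEED:+.3f} kn")
```

With higher data frequency (larger n) the scatter around the offset shrinks, but the +0.5 kn bias remains – which is exactly why both the crew and the algorithms are needed.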

From there, the discussion moved naturally to »Big Data«. It is coming, but nobody seemed overly impressed by data volume as such. »A lot of data still does not equal a lot of insight« received agreeing nods from both the operators and the developers. But using smart filters to automatically identify wrong data was seen as a likely way performance monitoring will evolve. Information fusion – combining various sensors, online services (e.g. for weather conditions or AIS speed and course data) and human data reporting – is on the horizon.
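As a rough illustration of what such a smart filter might look like (a deliberately simple sketch with made-up numbers, not any vendor’s actual algorithm), a median/MAD test can flag sensor dropouts and spikes in otherwise steady readings:

```python
import statistics

def flag_outliers(values, k=3.5):
    """Flag readings far from the median, scaled by the median absolute
    deviation (MAD). Robust against the very outliers it hunts for."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    # 1.4826 * MAD approximates a standard deviation for Gaussian data
    return [abs(v - med) / (1.4826 * mad) > k for v in values]

# Hypothetical shaft-power readings [kW]; 0.0 is a dropout, 9500 a spike
power = [7400, 7420, 7390, 0.0, 7410, 9500, 7405]
flags = flag_outliers(power)
print([p for p, f in zip(power, flags) if f])  # → [0.0, 9500]
```

Real filters combine several such tests with physical plausibility checks and cross-sensor comparisons – the information fusion the panel pointed to.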

In summary, the forum discussion took a rather positive view of ISO 19030 and the state of performance monitoring. Both the standard and its implementation in the industry are seen as work in progress. The way forward is cooperative sharing of experience.
Volker Bertram, Thomas Wägener