[Sle-hpc] Microservices and HPC

Nicolas Bock nicolas.bock at suse.com
Fri Jun 2 08:36:33 MDT 2017


On Fri, Jun 02, 2017 at 02:02:15PM +0200, Nicolas Morey-Chaisemartin wrote:
>
>
>Le 02/06/2017 à 12:40, Nicolas Bock a écrit :
>> Hi Nicolas,
>>
>> On Fri, Jun 02, 2017 at 10:20:53AM +0200, Nicolas Morey-Chaisemartin wrote:
>>> The issue I see with this is that it only applies to loosely-coupled parallel applications, at least if it is to be worth comparing to MPI over Infiniband.
>>>
>> You raise a good point. All of the applications I am aware of use MPI and assume low latency for their communication.
>>
>>> On Infiniband EDR (100Gb/s; HDR at 200Gb/s is now available, but I couldn't find any figures), the application-to-application latency is ~0.7µs, which is miles away from what Ethernet can do.
>>> So for tightly coupled applications with a lot of synchronization, I don't think microservices make sense.
>>
>> You are conflating the network fabric with the software approach. They are independent. If I ran an MPI-based application on Ethernet, I wouldn't expect good performance either.
>
>Agreed, BUT you will always get better performance from MPI than from a REST API,
>mainly because it was designed for this purpose only :)
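For what it's worth, the framing overhead alone is easy to see even before any network or latency enters the picture. A toy comparison (the payload, the `/compute` path, and the host name are made up for illustration; no real transport is involved):

```python
# Rough illustration of per-message overhead: an MPI-style send puts
# just the raw doubles on the wire, while even a minimal REST POST of
# the same data pays for HTTP framing plus JSON encoding.
import json
import struct

payload = [1.0, 2.0, 3.0, 4.0]

# What an MPI-style send would transmit: just the four doubles.
raw = struct.pack("<4d", *payload)

# What a minimal REST POST of the same data looks like on the wire.
body = json.dumps({"data": payload}).encode()
rest = (b"POST /compute HTTP/1.1\r\n"
        b"Host: worker\r\n"
        b"Content-Type: application/json\r\n"
        + b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n"
        + body)

print(len(raw), len(rest))   # the REST message is several times larger
```

And that is only the encoding cost per message; connection setup, parsing, and kernel round-trips come on top.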
>
>>
>>> For applications that work on very large independent datasets, it feels like a good idea, though.
>>> I don't know exactly how many scientific applications that represents, but seeing the number of libs/runtimes and other tools for fast communication that exist, I'd say not many...
>>
>> I don't see why a microservices-based approach must fundamentally suffer from higher latencies. Microservices on top of Infiniband would get the same network performance, and it comes down to the potential additional call overhead for the microservice, which one could optimize to an extent. MPI calls aren't overhead-free either.
>
>Because there are multiple ways to use Infiniband.
>What MPI does is remove all interaction with the kernel, which saves a lot of time by avoiding context switches, interrupts, etc.
>
>I don't see how a microservice "server" which forwards queries to the appropriate compute kernels could pull off the same trickery.
>Just as an example, MPI over IB (in some implementations at least) aggressively polls the last few bytes of the reception buffer to check whether the data has arrived.
>No interrupts, no notifications, no callbacks. Just pure data. And because it owns the core it runs on, the CPU is not wasted or stolen from another app.
>
>Dynamic services usually cannot do this kind of thing.
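The flag-byte polling trick you describe can be sketched in a few lines. This is a toy version using two threads and a plain byte buffer, not Infiniband verbs or real MPI; it only illustrates the synchronization pattern (a real implementation would also need memory fences to guarantee the payload is visible before the flag):

```python
# Sketch of busy-polling on the last byte of a reception buffer: the
# receiver spins on the flag byte instead of waiting for an interrupt,
# notification, or callback.
import threading

BUF_SIZE = 64
buf = bytearray(BUF_SIZE + 1)        # last byte acts as the arrival flag

def sender():
    buf[:BUF_SIZE] = b"x" * BUF_SIZE # 1. write the payload ...
    buf[BUF_SIZE] = 1                # 2. ... then set the flag last

def receiver():
    while buf[BUF_SIZE] == 0:        # busy-poll: burns the core, but no
        pass                         #    context switch, no notification
    return bytes(buf[:BUF_SIZE])     # flag set => payload has arrived

if __name__ == "__main__":
    t = threading.Thread(target=sender)
    t.start()
    data = receiver()
    t.join()
    assert data == b"x" * BUF_SIZE
```

In the real thing the "sender" is the NIC DMA-ing into registered memory, which is exactly the part a generic microservice stack has no hook into.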

Hi Nicolas,

What do you think about implementing a proof of concept to see how
far we can take this?
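A hypothetical minimal starting point, using only the Python standard library: one toy "compute" microservice that evaluates a single point of a parameter sweep per HTTP request. The `/compute` route, the query format, and the f(x) = x*x model are all made up for illustration:

```python
# Toy parameter-sweep microservice: a driver issues one HTTP request
# per parameter value to a stateless compute service.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Request format: GET /compute?x=<float>  ->  {"x": x, "y": x*x}
        x = float(parse_qs(urlparse(self.path).query)["x"][0])
        body = json.dumps({"x": x, "y": x * x}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def sweep(port, xs):
    """Drive the sweep: one request per parameter value."""
    results = []
    for x in xs:
        with urlopen(f"http://127.0.0.1:{port}/compute?x={x}") as r:
            results.append(json.loads(r.read()))
    return results

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), ComputeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    results = sweep(server.server_address[1], [1.0, 2.0, 3.0])
    assert [r["y"] for r in results] == [1.0, 4.0, 9.0]
    server.shutdown()
```

With something this small we could at least measure how the per-call overhead scales before investing in a proper framework.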

Best,

Nick

>Nicolas
>
>>
>> I completely agree that the core issue is the availability of software libraries. For a microservices-based approach to become useful, one would need a software library or framework that is as easy to use as, e.g., OpenMPI.
>>
>> Best,
>>
>> Nick
>>
>>> Nicolas
>>>
>>> Le 6/1/17 à 11:52 PM, Nicolas Bock a écrit :
>>>> I started doing some literature search on this and so far came up with a master's thesis that looked at a microservice design for a parameter sweep of a reaction-diffusion simulation [1]. The parallel efficiency is okay-ish, I would say, but definitely encouraging. The study is pretty simple and more a proof of concept than a fully developed application. So far I haven't come across anything else that uses microservices for a scientific application.
>>>>
>>>> I'll keep digging...
>>>>
>>>> [1] http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1045455&dswid=-5509
>>>>
>>>> On Wed, May 31, 2017 at 08:05:34PM -0500, David Byte wrote:
>>>>> Maybe we shouldn't constrain our thinking to the fabrics used today by microservices. Perhaps it is feasible to use RDMA or a registered memory region instead of a socket?
>>>>>
>>>>> Just some ponderings.
>>>>>
>>>>> Sent from my iPhone. Typos are Apple's fault.
>>>>>
>>>>>> On May 31, 2017, at 5:31 PM, David Byte <dbyte at suse.com> wrote:
>>>>>>
>>>>>> I’ve been chewing on this for a bit and thought I’d throw the question into the wild.  Has anyone heard of or seen a customer or partner thinking about HPC from a microservices perspective?  E.g., can you break down genetic sequencing, CFD, etc. into bite-sized chunks where microservices may enable a more flexible plug-and-play type environment?  The end result being an IFTTT-style process for building HPC workflows?
>>>>>>
>>>>>> David Byte
>>>>>> Sr. Technology Strategist
>>>>>> Alliances and SUSE Embedded
>>>>>> dbyte at suse.com
>>>>>> 918.528.4422
>>>>>> _______________________________________________
>>>>>> sle-hpc mailing list
>>>>>> sle-hpc at lists.suse.com
>>>>>> http://lists.suse.com/mailman/listinfo/sle-hpc
>>>>
>>>>
>>>>
>>>
>>
>
>

-- 
Nicolas Bock <nicolas.bock at suse.com>
Cloud Software Engineer
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nürnberg
Tel: +49-911-74053-0; Fax: +49-911-7417755;  https://www.suse.com/
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
https://keybase.io/nicolasbock
Key fingerprint = 3593 0140 1B6C BC6E 931F  BA9A 6BFC 7B4E A873 28F0

