From nicolas.bock at suse.com  Thu Jun  1 15:52:17 2017
From: nicolas.bock at suse.com (Nicolas Bock)
Date: Thu, 1 Jun 2017 15:52:17 -0600
Subject: [Sle-hpc] Microservices and HPC
In-Reply-To:
References: <619B2FC3-F6AC-4ABA-8B2D-8FA2DFF13867@suse.com>
Message-ID: <20170601215217.yuljubygg5ofubpk@rubberducky>

I started doing some literature search on this and so far came up with a
master's thesis that looked at a microservice design of a parameter sweep of
a reaction-diffusion simulation [1]. The parallel efficiency is okay-ish, I
would say, but definitely encouraging. The study is pretty simple and more a
proof of concept than a fully developed application. So far I haven't come
up with anything else that uses microservices for a scientific application.

I'll keep digging...

[1] http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1045455&dswid=-5509

On Wed, May 31, 2017 at 08:05:34PM -0500, David Byte wrote:
>Maybe we shouldn't constrain our thinking to the fabrics used today by
>microservices. Perhaps it is feasible to use RDMA or a memory register
>instead of a socket?
>
>Just some ponderings.
>
>Sent from my iPhone. Typos are Apple's fault.
>
>> On May 31, 2017, at 5:31 PM, David Byte wrote:
>>
>> I've been chewing on this for a bit and thought I'd throw the question
>> into the wild. Has anyone heard or seen a customer or partner thinking
>> about HPC from a microservices perspective? E.g., can you break down
>> genetic sequencing, CFD, etc. into bite-sized chunks where microservices
>> may enable a more flexible plug-and-play type of environment? The end
>> result being an IFTTT-style process for building HPC workflows?
>>
>> David Byte
>> Sr. Technology Strategist
>> Alliances and SUSE Embedded
>> dbyte at suse.com
>> 918.528.4422

-- 
Nicolas Bock
Cloud Software Engineer
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nürnberg
Tel: +49-911-74053-0; Fax: +49-911-7417755; https://www.suse.com/
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
https://keybase.io/nicolasbock
Key fingerprint = 3593 0140 1B6C BC6E 931F BA9A 6BFC 7B4E A873 28F0
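For reference, "parallel efficiency" here is the usual strong-scaling
measure: how much of the ideal p-fold speedup actually materializes,
E(p) = T(1) / (p * T(p)). A minimal helper makes the arithmetic concrete;
the timing numbers below are invented for illustration and are not taken
from the thesis.

    # Strong-scaling parallel efficiency: E(p) = T(1) / (p * T(p)).
    # The timings are made-up example numbers, not results from [1].
    def parallel_efficiency(t1, p, tp):
        return t1 / (p * tp)

    # 100 s on one worker vs 9.5 s on 16 workers -> E ~ 0.66, i.e. "okay-ish".
    print(parallel_efficiency(t1=100.0, p=16, tp=9.5))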
From nicolas.bock at suse.com  Fri Jun  2 04:40:11 2017
From: nicolas.bock at suse.com (Nicolas Bock)
Date: Fri, 2 Jun 2017 04:40:11 -0600
Subject: [Sle-hpc] Microservices and HPC
In-Reply-To: <8660866a-f64f-ec53-745f-43da0897f437@suse.de>
References: <619B2FC3-F6AC-4ABA-8B2D-8FA2DFF13867@suse.com>
 <20170601215217.yuljubygg5ofubpk@rubberducky>
 <8660866a-f64f-ec53-745f-43da0897f437@suse.de>
Message-ID: <20170602104011.jybwn6tzmp33waha@rubberducky>

Hi Nicolas,

On Fri, Jun 02, 2017 at 10:20:53AM +0200, Nicolas Morey-Chaisemartin wrote:
>The issue I see with this is that it only applies to loosely coupled
>parallel applications, at least if it is to be worth comparing to MPI on
>InfiniBand.

You raise a good point. All of the applications I am aware of make use of
MPI and assume very low latencies for their communication.

>On InfiniBand EDR (100 Gb/s; HDR at 200 Gb/s is now available, but I
>couldn't find any figures), the application-to-application latency is
>~0.7 µs, which is miles away from what Ethernet can do.
>So for tightly coupled applications with a lot of synchronization, I don't
>think microservices make sense.

You are conflating the network fabric with the software approach. They are
independent. If I ran an MPI-based application on Ethernet, I wouldn't
expect good performance either.

>For applications that work on very large independent datasets, it feels
>like a good idea though.
>I don't know exactly how many scientific applications that represents, but
>seeing the number of libraries, runtimes, and other tools for fast
>communication that exist, I'd say not many...

I don't see why a microservices-based approach would fundamentally have to
suffer from higher latencies. Microservices on top of InfiniBand would get
the same network performance, and it comes down to the potential additional
call overhead of the microservice, which one could optimize to an extent.
MPI calls aren't overhead-free either.

I completely agree that the core issue is the availability of software
libraries. For a microservices-based approach to become useful, one would
need a software library or framework that can be used just as easily as,
e.g., OpenMPI.

Best,

Nick
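To put a rough number on the per-call overhead being discussed here, the
sketch below (mine, not from the thread) times an MPI ping-pong against an
HTTP round trip on the same machine. It only measures same-node software
overhead, with no InfiniBand and no real microservice stack behind it, and
it assumes mpi4py is installed; the port number and the /ping endpoint are
made up for the example.

    # Illustrative same-node latency comparison; assumes mpi4py is installed.
    # Run as: mpiexec -n 2 python latency_sketch.py
    # The HTTP side uses only the standard library and a made-up local echo
    # endpoint; a real microservice stack (serialization, routing, TLS,
    # service discovery) would only add to what is measured here.
    import json
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    from mpi4py import MPI

    ITERS = 1000
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # MPI ping-pong between ranks 0 and 1: one full round trip per iteration.
    payload = bytearray(8)
    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(ITERS):
        if rank == 0:
            comm.Send(payload, dest=1, tag=0)
            comm.Recv(payload, source=1, tag=0)
        elif rank == 1:
            comm.Recv(payload, source=0, tag=0)
            comm.Send(payload, dest=0, tag=0)
    mpi_rtt = (time.perf_counter() - t0) / ITERS

    # HTTP round trip against a trivial in-process echo service.
    class Echo(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"pong": True}).encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):   # keep the benchmark output quiet
            pass

    if rank == 0:
        server = HTTPServer(("127.0.0.1", 8642), Echo)   # port is arbitrary
        threading.Thread(target=server.serve_forever, daemon=True).start()
        t0 = time.perf_counter()
        for _ in range(ITERS):
            urlopen("http://127.0.0.1:8642/ping").read()
        http_rtt = (time.perf_counter() - t0) / ITERS
        server.shutdown()
        print(f"MPI round trip:  {mpi_rtt * 1e6:8.1f} us")
        print(f"HTTP round trip: {http_rtt * 1e6:8.1f} us")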
From NMoreyChaisemartin at suse.de  Fri Jun  2 06:02:15 2017
From: NMoreyChaisemartin at suse.de (Nicolas Morey-Chaisemartin)
Date: Fri, 2 Jun 2017 14:02:15 +0200
Subject: [Sle-hpc] Microservices and HPC
In-Reply-To: <20170602104011.jybwn6tzmp33waha@rubberducky>
References: <619B2FC3-F6AC-4ABA-8B2D-8FA2DFF13867@suse.com>
 <20170601215217.yuljubygg5ofubpk@rubberducky>
 <8660866a-f64f-ec53-745f-43da0897f437@suse.de>
 <20170602104011.jybwn6tzmp33waha@rubberducky>
Message-ID:

On 02/06/2017 12:40, Nicolas Bock wrote:
> You are conflating the network fabric with the software approach. They are
> independent. If I ran an MPI-based application on Ethernet, I wouldn't
> expect good performance either.

Agreed, BUT you will always get better performance using MPI than using a
REST API, mainly because MPI was designed for this purpose only :)

> I don't see why a microservices-based approach would fundamentally have to
> suffer from higher latencies. Microservices on top of InfiniBand would get
> the same network performance, and it comes down to the potential
> additional call overhead of the microservice, which one could optimize to
> an extent. MPI calls aren't overhead-free either.

Because there are multiple ways to use InfiniBand. What MPI does is remove
all interaction with the kernel, which saves a lot of time by avoiding
context switches, interrupts, and so on.

I don't see how a microservice "server" that has to forward queries to the
appropriate compute kernels could pull off the same trickery.
Just as an example, MPI over IB (some implementations at least) aggressively
polls the last few bytes of the receive buffer to check whether the data has
arrived. No interrupt, no notification, no callback. Just pure data. And
because it owns the core it runs on, the CPU is not wasted or stolen from
another app.

Dynamic services usually cannot do this kind of thing.

Nicolas
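The polling trick reads roughly like the toy below. This is a deliberately
simplified user-space analogy, not how MPI or the verbs API are actually
implemented: a receiver spins on a flag byte at the end of a shared buffer
instead of waiting for any kernel notification, burning its core on purpose.

    # Toy analogy only: real MPI-over-IB polling goes through RDMA'd memory
    # and completion queues, not multiprocessing.Array. The point is the
    # shape of the trick: spin on the last byte of the buffer, no interrupts,
    # no callbacks, the core is burned on purpose.
    import multiprocessing as mp
    import time

    BUF_LEN = 64                 # payload bytes
    FLAG = BUF_LEN               # the byte after the payload is the flag

    def sender(buf):
        time.sleep(0.01)         # pretend the data takes a while to arrive
        for i in range(BUF_LEN): # fill the payload first ...
            buf[i] = 0x78
        buf[FLAG] = 1            # ... then set the flag: payload is complete

    def receiver(buf, elapsed):
        t0 = time.perf_counter()
        while buf[FLAG] == 0:    # aggressive busy-poll on the flag byte
            pass
        elapsed.value = time.perf_counter() - t0

    if __name__ == "__main__":
        buf = mp.Array("B", BUF_LEN + 1, lock=False)  # payload + flag byte
        elapsed = mp.Value("d", 0.0)
        procs = [mp.Process(target=receiver, args=(buf, elapsed)),
                 mp.Process(target=sender, args=(buf,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(f"data seen after {elapsed.value * 1e3:.3f} ms of spinning")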
From nicolas.bock at suse.com  Fri Jun  2 06:48:55 2017
From: nicolas.bock at suse.com (Nicolas Bock)
Date: Fri, 2 Jun 2017 06:48:55 -0600
Subject: [Sle-hpc] Microservices and HPC
In-Reply-To:
References: <619B2FC3-F6AC-4ABA-8B2D-8FA2DFF13867@suse.com>
 <20170601215217.yuljubygg5ofubpk@rubberducky>
 <8660866a-f64f-ec53-745f-43da0897f437@suse.de>
 <20170602104011.jybwn6tzmp33waha@rubberducky>
Message-ID: <20170602124855.wm6ixv74u44go3mo@rubberducky>

On Fri, Jun 02, 2017 at 02:02:15PM +0200, Nicolas Morey-Chaisemartin wrote:
>Agreed, BUT you will always get better performance using MPI than using a
>REST API, mainly because MPI was designed for this purpose only :)

I fully agree here. It comes down to how much of an impact a REST API has.
>Because there are multiple ways to use InfiniBand. What MPI does is remove
>all interaction with the kernel, which saves a lot of time by avoiding
>context switches, interrupts, and so on.
>
>I don't see how a microservice "server" that has to forward queries to the
>appropriate compute kernels could pull off the same trickery.
>Just as an example, MPI over IB (some implementations at least)
>aggressively polls the last few bytes of the receive buffer to check
>whether the data has arrived. No interrupt, no notification, no callback.
>Just pure data. And because it owns the core it runs on, the CPU is not
>wasted or stolen from another app.
>
>Dynamic services usually cannot do this kind of thing.

I agree, there are certainly some differences, and it might not be possible
to achieve the same level of performance. I think the question is whether
you can live with the performance impact in exchange for the advantages you
get in terms of scaling and code reuse.

Maybe it's just me, but I always found MPI tedious to use. I switched to
Charm++ for a few projects and found it a lot easier to work with. Although
its performance wasn't quite up to that of an MPI implementation, I felt
that the advantages in coding outweighed the performance impact. But again,
that's just me ;)

Nick
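As a rough illustration of the bookkeeping that can make plain MPI feel
tedious (a sketch only, written with mpi4py; Charm++ is a C++ framework and
its model looks quite different), here is a generic parameter sweep written
in the explicit rank-slicing style. compute() is a hypothetical placeholder
for the real per-parameter simulation.

    # Minimal mpi4py parameter sweep, illustrative only.
    # Run as: mpiexec -n 4 python sweep_mpi.py
    from mpi4py import MPI

    def compute(d):
        return d * d          # placeholder for the actual simulation run

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    params = list(range(100)) if rank == 0 else None
    params = comm.bcast(params, root=0)

    # Each rank has to work out its own slice of the sweep by hand.
    local = [compute(d) for i, d in enumerate(params) if i % size == rank]

    # Results arrive as one list per rank and have to be flattened by hand.
    gathered = comm.gather(local, root=0)
    if rank == 0:
        results = [r for chunk in gathered for r in chunk]
        print(f"{len(results)} results, e.g. {results[:5]}")

None of this is hard, but the index arithmetic and the gather-and-flatten
step are exactly the boilerplate that higher-level models hide.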
From nicolas.bock at suse.com  Fri Jun  2 08:36:33 2017
From: nicolas.bock at suse.com (Nicolas Bock)
Date: Fri, 2 Jun 2017 08:36:33 -0600
Subject: [Sle-hpc] Microservices and HPC
In-Reply-To:
References: <619B2FC3-F6AC-4ABA-8B2D-8FA2DFF13867@suse.com>
 <20170601215217.yuljubygg5ofubpk@rubberducky>
 <8660866a-f64f-ec53-745f-43da0897f437@suse.de>
 <20170602104011.jybwn6tzmp33waha@rubberducky>
Message-ID: <20170602143633.mlao3rbxtdfwgxhh@rubberducky>

On Fri, Jun 02, 2017 at 02:02:15PM +0200, Nicolas Morey-Chaisemartin wrote:
>I don't see how a microservice "server" that has to forward queries to the
>appropriate compute kernels could pull off the same trickery.
>
>Dynamic services usually cannot do this kind of thing.

Hi Nicolas,

what do you think of implementing a proof of concept to see how far we can
take this?

Best,

Nick
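For what such a proof of concept could look like, here is a sketch under my
own assumptions (not an agreed design): the compute kernel sits behind a
tiny HTTP microservice and a client fans a parameter sweep out over whatever
workers are registered. Only the Python standard library is used; the ports,
the /compute endpoint, and compute() itself are invented for the example.

    # Proof-of-concept sketch (invented names and ports, illustrative only):
    # the compute kernel is behind a tiny HTTP service and a client fans the
    # sweep out over the registered workers. Standard library only.
    import json
    import threading
    from concurrent.futures import ThreadPoolExecutor
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse
    from urllib.request import urlopen

    def compute(d):
        return d * d                      # placeholder for the real kernel

    class ComputeService(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            d = float(query["d"][0])
            body = json.dumps({"d": d, "result": compute(d)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):     # keep the demo output readable
            pass

    def start_worker(port):
        server = HTTPServer(("127.0.0.1", port), ComputeService)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return f"http://127.0.0.1:{port}"

    def submit(worker, d):
        with urlopen(f"{worker}/compute?d={d}") as resp:
            return json.load(resp)["result"]

    if __name__ == "__main__":
        # Stand-ins for independently deployed worker services.
        workers = [start_worker(port) for port in (8701, 8702, 8703, 8704)]
        params = range(100)
        with ThreadPoolExecutor(max_workers=len(workers) * 4) as pool:
            # Round-robin the sweep over the workers; no rank bookkeeping.
            results = list(pool.map(
                lambda item: submit(workers[item[0] % len(workers)], item[1]),
                enumerate(params)))
        print(f"{len(results)} results, e.g. {results[:5]}")

Nothing about latency here is flattering, but it shows the plug-and-play
angle from the original question: the workers know nothing about each other,
and swapping in a different kernel is a deployment change rather than a
change to the client code.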