```ts
import * as Bluebird from 'bluebird';

const api = {} as any; // my data provider

async function main() {
  // first ids level
  const ids: number[] = await api.getIds();
  return Bluebird.map(ids, async (id) => {
    const object: { name: string; childIds: number[] } = await api.getObject(id);
    // second ids level
    const subIds: number[] = object.childIds;
    return Bluebird.map(subIds, async (subId) => {
      const subObject = await api.getSubElm(id, subId);
      // third ids level:
      // subObject contains some children that will be iterated with a third Bluebird.map call
    }, { concurrency: 50 });
  }, { concurrency: 50 });
}
```
This code can start up to 50 × 50 = 2 500 concurrent calls to the API.
With a 4-level loop and a concurrency of 20 at each level, the worst case is 20 × 20 × 20 × 20 = 160 000, and allowing 160 000 concurrent calls will blow up the Node.js process.
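The worst case is simply the product of the per-level concurrency settings. A tiny illustrative helper (`worstCaseInFlight` is not a Bluebird API, just arithmetic) makes the numbers above concrete:

```typescript
// Worst-case number of in-flight API calls for nested Bluebird.map calls:
// the product of the per-level `concurrency` settings.
// (worstCaseInFlight is an illustrative helper, not a Bluebird API.)
function worstCaseInFlight(concurrencyPerLevel: number[]): number {
  return concurrencyPerLevel.reduce((total, c) => total * c, 1);
}

console.log(worstCaseInFlight([50, 50]));         // 2500
console.log(worstCaseInFlight([20, 20, 20, 20])); // 160000
```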
If the ids array sizes vary a lot, from empty or tiny to very large, there is no way to keep a constant processing speed.
It is possible to refactor this code for more linear performance by preloading each level of API calls, but that forces me to hold each API layer in memory.
In Java I can use `Executors.newFixedThreadPool(max_threads)` to fix the number of parallel threads.
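Under the current API, one workaround is to share a single hand-rolled counting semaphore across every nesting level, so the total number of in-flight API calls is capped no matter how deeply the maps are nested. This is only a sketch of that idea; the `Semaphore` class and `pool` below are assumptions, not existing Bluebird features:

```typescript
// A minimal counting semaphore (illustrative, not a Bluebird API) that
// caps the TOTAL number of concurrent operations across all nesting
// levels, similar to a fixed-size thread pool in Java.
class Semaphore {
  private waiters: Array<() => void> = [];
  constructor(private available: number) {}

  private async acquire(): Promise<void> {
    if (this.available > 0) { this.available--; return; }
    await new Promise<void>((resolve) => this.waiters.push(() => resolve()));
  }

  private release(): void {
    const next = this.waiters.shift();
    if (next) next(); // hand the slot directly to the next waiter
    else this.available++;
  }

  // Run fn while holding one slot; the slot is freed on success or error.
  async run<T>(fn: () => Promise<T>): Promise<T> {
    await this.acquire();
    try { return await fn(); }
    finally { this.release(); }
  }
}

// One pool for the whole process: at most 250 API calls in flight,
// no matter how deeply the Bluebird.map calls are nested.
const pool = new Semaphore(250);
```

Each API call in the nested maps is then wrapped individually, e.g. `const object = await pool.run(() => api.getObject(id));`. Wrapping only the call itself, never the whole nested task, means a parent waiting on its children does not hold a slot.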
Regarding your first example: what should happen when the outer map eats up all available resources from the pool and the inner map gets an empty pool?
The outer map won't resolve until all of its children are resolved.
In the worst case the outer loop stays running because a single task has not completed; the same thing happens in the second map, while the third map is active with all of its concurrent promises.
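To make that failure mode concrete, here is a small sketch (the slot-counting code below is hypothetical, not a Bluebird API) of what happens when a parent task holds a pool slot while awaiting a child that needs a slot from the same pool:

```typescript
// A one-slot pool: the outer task takes the only slot, then waits for a
// second slot for its child work -- which can never be granted, so the
// pipeline stalls. (Hypothetical sketch, not a Bluebird API.)
let freeSlots = 1;
const waiters: Array<() => void> = [];

function acquire(): Promise<void> {
  if (freeSlots > 0) { freeSlots--; return Promise.resolve(); }
  return new Promise<void>((resolve) => waiters.push(() => resolve()));
}

async function outerTask(): Promise<string> {
  await acquire(); // the outer level takes the last slot...
  await acquire(); // ...then waits for a child slot that is never released
  return "finished";
}

// Detect the stall instead of hanging forever:
const timeout = new Promise<string>((resolve) =>
  setTimeout(() => resolve("stalled"), 50));

Promise.race([outerTask(), timeout]).then((winner) => {
  console.log(winner); // "stalled"
});
```

The usual way out is to hold a slot only for the duration of each individual API call, releasing it before awaiting child work.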
Hi,

This is a code pattern I have commonly with the current Bluebird (see the code above). A real-life 3-level sample can be found here:

With some new features, something like:

to limit the number of concurrent tasks to 250.

Or:

In this case: