Rule proposal: prefer TypedArrays over node:buffer
#1885
Comments
Is there a rule for this in some plug-in?
Generally I'm positive towards using TypedArrays instead, that's how I work myself. It might be a bit too strict though, and I think that we should evaluate each case thoroughly to make sure that it's strictly better in all ways.
Hex is common to use for e.g. ids or to write out small numbers. If I recall correctly, Base64 inside JSON that is then GZip-compressed is almost as small as the raw data. And it has the big upside of working in a lot of places that already support JSON (e.g. GraphQL comes to mind). I don't think that Standard should disallow Hex or Base64 encodings...
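For reference, both encodings are easy to produce without `node:buffer`. A minimal sketch (the variable names are mine, not from any library; `btoa` is available in browsers and in Node.js 16+):

```javascript
// Hex and Base64 encoding of a Uint8Array without node:buffer.
const bytes = new Uint8Array([0xde, 0xad, 0xbe, 0xef])

// Hex: two lowercase hex digits per byte.
const hex = [...bytes].map(b => b.toString(16).padStart(2, '0')).join('')

// Base64: map byte values to a "binary string", then btoa.
const base64 = btoa(String.fromCharCode(...bytes))

console.log(hex)    // 'deadbeef'
console.log(base64) // '3q2+7w=='
```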
This sounds nice, but we should benchmark that there isn't a penalty to creating a
I would expect a rule like this to appear in eg https://github.com/sindresorhus/eslint-plugin-unicorn before we add it here
There should be a rule for this somewhere if the ecosystem has adopted it. Then we can add it here and check how much it breaks. When the breakage is small enough, we can open PRs to fix the last few places that break, and then we can merge it here.
if you want to send both JSON and binary data then you could do something like

```js
const files = [file1, file2, file3]
const fd = new FormData()
fd.set('json', JSON.stringify({
  lastModified: files.map(file => file.lastModified)
}))
files.forEach(file => fd.append('fileUploads', file))
fetch('/upload', { body: fd, ... })
```

and then on the receiver (server) side, you could do:

```js
async function income(req, res) {
  const fd = await new Response(req, { headers: req.headers }).formData()
  const files = fd.getAll('fileUploads')
  const json = JSON.parse(fd.get('json'))
}
```

And there you have a transport that supports both JSON + binary data. You could even do the receive part on the client side.

Another thing I like to do is simply encode a custom central JSON directory blob that describes how large each file is, and then just encode everything into a new Blob.

I have also started to like and use cbor-x because it's better than any JSON format. (It recently got support for encoding blobs as well, without having to read them until you actually need them, just like how the Blob constructor only creates reference points instead of copying the data.) Honestly I wish that we got …
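The "central JSON directory blob" idea above can be sketched like this (`packParts`/`unpackParts` are hypothetical names of my own, and the 4-byte length prefix is one possible framing, not a standard):

```javascript
// Pack several Uint8Arrays into one Blob: a 4-byte big-endian header length,
// then a JSON directory listing each part's size, then the raw bytes.
function packParts(parts) {
  const directory = JSON.stringify(parts.map(p => p.byteLength))
  const header = new TextEncoder().encode(directory)
  const headerLen = new Uint8Array(4)
  new DataView(headerLen.buffer).setUint32(0, header.byteLength)
  return new Blob([headerLen, header, ...parts])
}

// Reverse the framing: read the directory, then slice each part back out.
async function unpackParts(blob) {
  const buf = new Uint8Array(await blob.arrayBuffer())
  const headerLen = new DataView(buf.buffer).getUint32(0)
  const sizes = JSON.parse(new TextDecoder().decode(buf.subarray(4, 4 + headerLen)))
  const parts = []
  let offset = 4 + headerLen
  for (const size of sizes) {
    parts.push(buf.subarray(offset, offset + size))
    offset += size
  }
  return parts
}
```

This needs `Blob` as a global, which browsers and Node.js 18+ provide.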
This doesn't help me if I want to add e.g. … If Base64 inside JSON that is then GZip-compressed is comparable in size, then what benefit do I get from doing all that work? That said, CBOR seems cool, and it would be interesting to use it instead of JSON as the transport layer in e.g. GraphQL. But I think that that change needs to come from other projects, instead of a linter pushing it.
I think that this was very well put 👍
Description
This rule is mostly targeted at cross-platform compatibility. `node:buffer` is a Node-specific module, and I mostly just see it as something that bloats browser bundles and runs slower (in non-NodeJS contexts, or when using `npm:buffer` instead of `node:buffer`).

`TextEncoder`/`TextDecoder` can be used instead to turn your data to/from string/`Uint8Array`, and in the browser they are actually much faster than `Buffer.from(str)` and `buf.toString()` (which originally used `string_decoder`). `buffer.slice` also overrides `uint8array.slice` with subarray behaviour, which is unexpected and now also deprecated. `Buffer.isBuffer` can be replaced with `instanceof Uint8Array` to accept both `Uint8Array` and `node:buffer`. Something even better would be to accept any TypedArray using `ArrayBuffer.isView`.

I bet if you read this issue then you will maybe also be convinced that `node:buffer` is just unnecessary.

Fail
Pass
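As a rough sketch of what the Fail/Pass sections could contain (my own examples, not taken from the proposal):

```javascript
// Fail: Node-specific Buffer APIs.
// const buf = Buffer.from('hello', 'utf8')
// const s = buf.toString('utf8')
// if (Buffer.isBuffer(buf)) { /* ... */ }

// Pass: platform-neutral equivalents that work in browsers too.
const bytes = new TextEncoder().encode('hello') // Uint8Array of UTF-8 bytes
const str = new TextDecoder().decode(bytes)     // back to 'hello'
const isBytes = bytes instanceof Uint8Array     // true
```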
Additional Info
Replacing `node:buffer` with `Uint8Array` can be a lot of work, and maybe not that easy to autofix. Maybe dividing it up into smaller tasks could be easier, like:

- `TextDecoder` over `buf.toString()` and `node:string_decoder`
- `instanceof Uint8Array` over `Buffer.isBuffer`

or something like that...