"use strict";
var external = require("./external");
var DataWorker = require("./stream/DataWorker");
var Crc32Probe = require("./stream/Crc32Probe");
var DataLengthProbe = require("./stream/DataLengthProbe");
/**
 * Represent a compressed object, with everything needed to decompress it.
 * @constructor
 * @param {number} compressedSize the size of the data compressed.
 * @param {number} uncompressedSize the size of the data after decompression.
 * @param {number} crc32 the crc32 of the decompressed file.
 * @param {object} compression the type of compression, see lib/compressions.js.
 * @param {String|ArrayBuffer|Uint8Array|Buffer} data the compressed data.
 */
function CompressedObject(compressedSize, uncompressedSize, crc32, compression, data) {
    this.compressedSize = compressedSize;
    this.uncompressedSize = uncompressedSize;
    this.crc32 = crc32;
    this.compression = compression;
    this.compressedContent = data;
}
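
// A minimal usage sketch for the constructor above, assuming the sibling modules
// lib/compressions.js (its STORE entry) and lib/crc32.js behave as in this repository;
// the byte values are purely illustrative.
//
//     var compressions = require("./compressions");
//     var crc32 = require("./crc32");
//     var bytes = new Uint8Array([104, 101, 108, 108, 111]); // "hello", stored as-is (no compression)
//     var stored = new CompressedObject(bytes.length, bytes.length, crc32(bytes), compressions.STORE, bytes);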
CompressedObject.prototype = {
    /**
     * Create a worker to get the uncompressed content.
     * @return {GenericWorker} the worker.
     */
    getContentWorker: function () {
        var worker = new DataWorker(external.Promise.resolve(this.compressedContent))
            .pipe(this.compression.uncompressWorker())
            .pipe(new DataLengthProbe("data_length"));

        var that = this;
        worker.on("end", function () {
            if (this.streamInfo["data_length"] !== that.uncompressedSize) {
                throw new Error("Bug : uncompressed data size mismatch");
            }
        });
        return worker;
    },
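
    // Sketch of consuming the worker returned by getContentWorker, assuming the
    // StreamHelper from lib/stream/StreamHelper.js (worker, outputType, mimeType)
    // is used to accumulate the chunks; "uint8array" is one of its output types.
    //
    //     var StreamHelper = require("./stream/StreamHelper");
    //     new StreamHelper(compressedObject.getContentWorker(), "uint8array", "")
    //         .accumulate()
    //         .then(function (data) {
    //             // data: the decompressed content as a Uint8Array
    //         });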
    /**
     * Create a worker to get the compressed content.
     * @return {GenericWorker} the worker.
     */
    getCompressedWorker: function () {
        return new DataWorker(external.Promise.resolve(this.compressedContent))
            .withStreamInfo("compressedSize", this.compressedSize)
            .withStreamInfo("uncompressedSize", this.uncompressedSize)
            .withStreamInfo("crc32", this.crc32)
            .withStreamInfo("compression", this.compression)
            ;
    }
};
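
// Sketch of reading the compressed bytes back out of getCompressedWorker; chunks emitted
// by a GenericWorker are assumed to carry {data, meta} as elsewhere in lib/stream, and
// resume() starts a paused DataWorker.
//
//     var worker = compressedObject.getCompressedWorker();
//     worker.on("data", function (chunk) {
//         // chunk.data: a slice of the still-compressed content
//     });
//     worker.on("end", function () {
//         // worker.streamInfo.crc32 / compressedSize / uncompressedSize were set above
//     });
//     worker.resume();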
/**
 * Chain the given worker with other workers to compress the content with the
 * given compression.
 * @param {GenericWorker} uncompressedWorker the worker to pipe.
 * @param {Object} compression the compression object.
 * @param {Object} compressionOptions the options to use when compressing.
 * @return {GenericWorker} the new worker compressing the content.
 */
CompressedObject.createWorkerFrom = function (uncompressedWorker, compression, compressionOptions) {
    return uncompressedWorker
        .pipe(new Crc32Probe())
        .pipe(new DataLengthProbe("uncompressedSize"))
        .pipe(compression.compressWorker(compressionOptions))
        .pipe(new DataLengthProbe("compressedSize"))
        .withStreamInfo("compression", compression);
};
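
// Sketch of wiring createWorkerFrom, assuming a source GenericWorker that emits the raw
// (uncompressed) bytes and the DEFLATE entry from lib/compressions.js; {level: 6} follows
// the pako-style compressionOptions JSZip passes to the deflate worker.
//
//     var compressions = require("./compressions");
//     var compressedWorker = CompressedObject.createWorkerFrom(
//         rawDataWorker,            // any GenericWorker producing the uncompressed bytes
//         compressions.DEFLATE,
//         {level: 6}
//     );
//     // compressedWorker's streamInfo now gains crc32, uncompressedSize and compressedSize.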
module.exports = CompressedObject;