I have successfully implemented compression and decompression of files using 7zip in Python, but I am not able to figure out how to integrate this with Django's uploaded files, so that whenever a file is uploaded, a .7z version of that file is created and stored on disk.
I know two ways to work this out: if you are simply looking to compress images, you can refer to this dev.to article on compressing images specifically. The other way is to require users to upload zip files as described here, but change the accepted file extensions to .zip, .rar, or whatever you deem necessary.
There are packages that automatically zip all types of files, such as this PyPI package, but I'm not sure how well those will work, as it's not a mainstream package. If you want to conserve upload space, try setting a size limit instead.
In general, if the 7z file is small, the second method is worth using, because it doesn't require you to install an ISO creation program. However, if the 7z file is large, I recommend the first method, because the upload or download in method 2 may take a long time.
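For the original question, here is a minimal sketch of compressing an uploaded file's contents to disk. The standard library has no .7z writer, so this uses the stdlib `lzma` module (.xz, the same algorithm 7z defaults to); swap in a third-party package such as py7zr if you need a real .7z container. The function name and the chunk-iterable interface are assumptions for illustration — in a Django view you would pass `request.FILES["file"].chunks()`.

```python
import lzma
from pathlib import Path

def compress_upload(chunks, dest_path):
    """Stream an uploaded file's chunks into an .xz file on disk.

    `chunks` is any iterable of bytes. In a Django view you would pass
    request.FILES["file"].chunks() so large uploads are never held in
    memory. The stdlib has no .7z writer, so this writes .xz; use
    py7zr.SevenZipFile for a true .7z container.
    """
    dest = Path(dest_path)
    with lzma.open(dest, "wb") as out:
        for chunk in chunks:
            out.write(chunk)
    return dest
```

Calling `compress_upload(upload.chunks(), "uploads/archive.xz")` from the view (or a post-save handler) then leaves the compressed copy on disk alongside the original.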
In long: I usually split large archives. When I upload those files online, they are checksum-checked, so if I try to upload a backup (simply renamed) of those files, they are not added, because they have the same checksum as the originals.
However, nothing is perfect. Sometimes, when I later try to access some of the parts I uploaded (even from another PC — this is the best method to transfer files between PCs that are not connected at the same time, without overwhelming email attachments), they turn out corrupted, or the link is slow, or the server crashes.
As of now, whenever I back something up, I add 10% par2 redundancy, and so far this has been enough. However, for some archives, I want one or two fully redundant backups. Achieving this with par2 alone is impracticable, because rebuilds would take hours at 100% CPU, which costs time and electricity.

Currently, I re-archive the files I want to back up, but this produces new archives which are incompatible with the old ones. So if I can only partially retrieve the former archive and the new archive, there is no way to combine them to recover the original files: I have made two backups with a doubled point of failure. Instead, if I could upload two or three interchangeable sets, I would have only 1/2 or 1/3 of the points of failure, because all copies of the same file would need to get corrupted to make the archive unreadable, and at that point I could use my 10% par2 redundancy to rebuild the missing part for all backups.
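The redundancy argument above can be checked with simple probability: if each uploaded copy of a part corrupts independently, a part is unrecoverable (before par2 kicks in) only when every copy corrupts. The corruption rate used here is purely illustrative, not from the thread.

```python
def loss_probability(p, copies):
    """Probability that all `copies` of a part corrupt,
    each independently with probability p."""
    return p ** copies

# Assumed per-copy corruption rate of 1%, purely illustrative:
# one copy is lost with probability 1e-2, three interchangeable
# copies are all lost with probability ~1e-6, which the 10% par2
# redundancy can then repair.
single = loss_probability(0.01, 1)
triple = loss_probability(0.01, 3)
```

The independence assumption matters: copies on the same failing server are not independent, which is why the thread also discusses spreading chunks across hosters.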
I thought about that long and hard, and I would like to point out some facts:
- That stuff bon-de-rado said: why don't you just add some "panning", upload the files, and remove the panning before extracting?
- Uploading three identical files is still (some) flooding. If you need 100 files, you will upload 300; if 10,000, you will upload 30,000, and so on. It's still the same web portal. It's not without reason that they have this checksum rejecter.
- Since you expect the same portal not to host your files correctly (having errors in three identical files at different locations within each file), you should maybe think about switching to a hoster who can be trusted. I have some quite cheap webspace and use (S)FTP, and I have never had any problems. Connection errors occur, but I can automatically resume from the last position.
- Breaking backwards compatibility: if 7z sees a file that is not in the spec, it will quit. Panned files are not in the spec, so you would break compatibility. Though I really don't get why you can't just remove the panning before extracting. Can you clarify that?
If you are a good scripter, you could script this. You should also build a nice database of all your chunks on those hosters. A daily job should go through each hoster and alert you if some chunks can no longer be reached, and an automated correction job should re-upload those missing chunks using data from the other hosters.
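The audit-and-repair job described above can be sketched as follows. Hosters are modelled as plain dicts mapping chunk id to bytes; in reality each access would be an (S)FTP or HTTP call. All names here are hypothetical, not from any real tool.

```python
def audit_and_repair(catalog, hosters):
    """Daily correction job, a sketch.

    `catalog` is the full set of chunk ids that should exist (the
    database of chunks); `hosters` maps hoster name -> {chunk_id: bytes},
    standing in for remote storage. Chunks missing from a hoster are
    re-uploaded from any surviving copy; ids lost on *every* hoster are
    returned so the par2 redundancy can rebuild them.
    """
    lost_everywhere = set()
    for chunk_id in catalog:
        copies = [s[chunk_id] for s in hosters.values() if chunk_id in s]
        if not copies:
            lost_everywhere.add(chunk_id)  # needs the par2 rebuild
            continue
        for store in hosters.values():
            store.setdefault(chunk_id, copies[0])  # re-upload a survivor
    return lost_everywhere
```

After a run, every chunk that survives on at least one hoster is present on all of them again, and only the returned ids need the expensive par2 recovery.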
- Trimming is not as simple to batch as panning. Panning is just a "copy /b file1+file2 file_panned"; however, with some trickery, trimming is still possible. But I thought that adding an EOF marker would not be so difficult if you already know how the split files are made, and just asking costs nothing.
- Actually, the checksum is not there to reject but to substitute: say an online file gets corrupted, or the server it is on crashes. If you can re-upload the file, all existing references simply point to the new copy, so if you keep a database linking to all those files, no records need to be changed. We can say the checksum works "client-wise".
- I'm not talking about errors during upload: obviously, if you hit one of those, you can just resume or restart the upload until it completes. I'm talking about trying to access the file later, when you no longer have the originals on your HD. It's not a matter of a trusted or untrusted hoster: out of thousands of files so far, I have seen only one server-side error. There is just some sensitive data I want to back up for a longer time, so I want to be protected even in the worst case.
- I don't understand what you say about backwards compatibility: if you pan one whole 7z file, it is still readable, because it is already wrapped and has the footer EOF marker. The problem arises only with split archives, which have no EOF marker in each split. If you are referring to opening new files with an older version of 7-Zip, that is like asking to open an LZMA2 7z with the 4.XX version, which is not right. The rule is that archives made with older 7z versions will always be readable by newer releases, but new archives only need to be readable starting from the release they were made with — and an EOF marker keeps this rule true.

As I already said, I thought that just asking would do no harm, and it could be a nice addition for those who, like me, want to keep files online; and trimming is not as natural as panning.
In the best case, it needs double the HD space, to accommodate the new files, and time to generate them. Native reading of panned files would need no new space and no extra time.
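"Panning" as used in this thread is plain binary concatenation (the Windows `copy /b part+pad` trick), and the poster's point is that it can be undone before extraction. A minimal sketch, with hypothetical function names; the part bytes below are illustrative, not a real archive:

```python
def pan(part, pad):
    """Append filler bytes to a split-archive part, so the hoster's
    checksum sees a different file (the `copy /b part+pad` trick)."""
    return part + pad

def unpan(panned, original_length):
    """Trim the filler again so 7-Zip sees the exact original part.
    Requires recording each part's original length (or using a
    recognizable pad pattern)."""
    return panned[:original_length]

part = b"\x37\x7a\xbc\xaf"        # illustrative bytes only
padded = pan(part, b"\x00" * 16)
assert unpan(padded, len(part)) == part
```

The round trip is exact, which is why removing the panning before extraction (rather than asking 7-Zip to tolerate trailing data) keeps full compatibility.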
I think you need to better explain what you mean by flooding. I pay for unlimited space; is it not my right to use it? If I put 3 GB online, who cares whether they are all different files, or the same 1 GB of files three times? Whatever check is made online exists for one reason: to let files always be referred to by the same link. Think of sharing a file with a teammate: instead of emailing him the whole file, you just send him a link. If there is then a server problem and the file is unreachable or damaged, you just upload it again, and the original link you sent stays the same. If links were always different, you would have to keep track of everyone you sent them to and update them with the new link. In short, the checksum is computed to give file references long-term stability, provided you can re-upload the file when needed.
Notes
Uploaded files will be deleted immediately. We do NOT store your files.
This site uses rar2john, zip2john, and 7z2john from the JohnTheRipper tools to extract the hash.
The goal of this page is to make it very easy to convert your ZIP / RAR / 7zip archive files (.rar, .zip, .7z) to "hashes" which hashcat/john can crack
We can also attempt to recover its password: send your file via our homepage.
How to use?
It couldn't be easier: just select and upload your ZIP / RAR / 7zip archive file (max size: 200 MB). The hash will be computed in the "Output" part.
If your data is confidential and you do not wish to share your hydraulic model publicly, the below Bentley Sharefile system can be used to securely upload the model where only Bentley Support staff have access. Models sent to Bentley will not be shared outside of Bentley.
3. Enter all applicable information in the fields provided and click "Upload Files". Please see further below for the files that need to be included in the zip. An activity indicator will appear, identifying that the upload is taking place. Once complete, an informational message indicates whether the upload was successful.
IMPORTANT: If the upload is successful, you should confirm that you have uploaded the file(s) and note the file name(s) in the "Forum thread URL" forum thread. Note that the file(s) you upload will ONLY be accessible to Bentley colleagues — they will NOT appear in the Communities forum thread.
WobZip is a free service to unzip compressed files online. In addition to RAR files, it supports uncompressing many popular archive formats including 7z, Gzip, UDF, VHD, CAB, and CHM. You can either upload the file from your computer or simply enter the URL of the supported file to begin the extraction process.
WobZip also lets you extract password-protected archives. All you need to do is enter the password for the compressed file after uploading the archive. The maximum upload limit is 200 MB per file. Despite the 200 MB limitation, WobZip is probably the best online tool out there for extracting your compressed files.
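If you would rather not upload an archive to a third-party service at all, Python's standard library can extract zip files locally, including legacy ZipCrypto password protection via the `pwd` argument. This is a sketch under stated limits: the stdlib cannot extract AES-encrypted zips or .rar/.7z files, for which third-party packages (e.g. rarfile, py7zr) exist.

```python
import zipfile

def extract(archive_path, dest_dir, password=None):
    """Extract a .zip locally instead of uploading it to a service.

    `pwd` handles legacy ZipCrypto passwords; AES-encrypted zips and
    .rar/.7z archives need third-party packages.
    """
    pwd = password.encode() if password else None
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest_dir, pwd=pwd)
```

For unprotected archives, simply omit the password argument.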
I have a 500 GB external drive full of data. I need to use this drive, so, since I have 1 TB of storage in OneDrive, I want to upload the content to free up the drive. I have quite a good connection too, so I figured it wouldn't take too long. The thing is that I can't find a way to upload this folder with the OneDrive Windows app.
Looking online, everyone proposes moving the folders to my computer and then uploading them, but that isn't an option, since I don't have enough space. The other option is using the web app, but it is very unreliable and will stop working more often than not. Besides, if it stops, the upload has to start from the beginning.
You may be running into the 20 GB OneDrive upload limit. Divide the files into manageable folders, and make compressed archives of those folders. You can file folders inside other folders. Use a visualizer like to identify which folders you need to break into sub-folders.
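The "divide into manageable folders and archive each" advice above can be scripted. This is a sketch, not a OneDrive tool: it greedily groups file paths so each group stays under a size limit, then writes one zip per group with the stdlib (a single file larger than the limit still gets its own group and would need splitting by other means).

```python
import zipfile
from pathlib import Path

def size_bounded_groups(files, limit_bytes):
    """Greedily pack file paths into groups whose total size stays
    under `limit_bytes`, so each group can become one archive that
    fits the upload limit."""
    groups, current, current_size = [], [], 0
    for f in files:
        size = Path(f).stat().st_size
        if current and current_size + size > limit_bytes:
            groups.append(current)
            current, current_size = [], 0
        current.append(f)
        current_size += size
    if current:
        groups.append(current)
    return groups

def archive_groups(groups, out_dir):
    """Write each group to out_dir/part_N.zip using the stdlib."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, group in enumerate(groups):
        with zipfile.ZipFile(out / f"part_{i}.zip", "w",
                             zipfile.ZIP_DEFLATED) as zf:
            for f in group:
                zf.write(f, Path(f).name)
```

For the 20 GB limit mentioned above you would call `size_bounded_groups(paths, 20 * 10**9)` (or a bit less, to leave headroom) before archiving.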