How to find/identify large files/commits in Git history?


I’ve got a Git repo of 300 MB. My currently checked-out files weigh 2 MB, and the Git history weighs 298 MB. This is basically a code-only repo that should not weigh more than a few MB.

Most likely, somebody at some point committed some heavy files by accident (videos, huge images, etc.), and then removed them… but not from Git, so we have a history with useless large files. How can I track down the large files in the Git history? There are 400+ commits, so going one by one will be time-consuming.

NOTE: my question is not about how to remove the file, but how to find it in the first place.


I’ve found this script very useful in the past for finding large (and non-obvious) objects in a git repository:

#!/bin/bash
#set -x

# Shows you the largest objects in your repo's pack file.
# Written for OS X.
# @see
# @author Antony Stubbs

# set the internal field separator to line break, so that we can iterate easily over the verify-pack output
IFS=$'\n';

# list all objects including their size, sort by size, take top 10
objects=`git verify-pack -v .git/objects/pack/pack-*.idx | grep -v chain | sort -k3nr | head`

echo "All sizes are in kB's. The pack column is the size of the object, compressed, inside the pack file."

output="size,pack,SHA,location"
allObjects=`git rev-list --all --objects`
for y in $objects
do
    # extract the size in bytes
    size=$((`echo $y | cut -f 5 -d ' '`/1024))
    # extract the compressed size in bytes
    compressedSize=$((`echo $y | cut -f 6 -d ' '`/1024))
    # extract the SHA
    sha=`echo $y | cut -f 1 -d ' '`
    # find the object's location in the repository tree
    other=`echo "${allObjects}" | grep $sha`
    #lineBreak=`echo -e "\n"`
    output="${output}\n${size},${compressedSize},${sha},${other}"
done

echo -e $output | column -t -s ', '

That will give you the object name (SHA-1 hash) of each blob, and then you can use a script like this one:

… to find the commit that points to each of those blobs.
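That second script is missing from this copy of the answer, but a minimal sketch of the idea, assuming you pass the blob SHA as the first argument (the script name is illustrative), could look like this:

#!/bin/bash
# Sketch only: list every commit whose tree contains the given blob SHA.
# Usage: ./find-blob-commit.sh <blob-sha>
blob_sha="$1"
git log --all --pretty=format:'%H %s' | while read -r commit subject; do
    # search the commit's full tree listing for the blob SHA
    if git ls-tree -r "$commit" | grep -q "$blob_sha"; then
        echo "$commit $subject"
    fi
done

This brute-force approach walks every commit, so it is slow on large histories, but it requires nothing beyond stock Git.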


I’ve found a one-liner solution on the ETH Zurich Department of Physics wiki page (close to the end of that page). Just do a git gc to remove stale junk, and then run:

git rev-list --objects --all | grep "$(git verify-pack -v .git/objects/pack/*.idx | sort -k 3 -n | tail -10 | awk '{print$1}')"

will give you the 10 largest files in the repository.

There’s also a lazier solution available: GitExtensions now has a plugin that does this in the UI (and handles history rewrites as well).

[Image: GitExtensions 'Find large files' dialog]


🚀 A blazingly fast shell one-liner 🚀

This shell script displays all blob objects in the repository, sorted from smallest to largest.

For my sample repo, it ran about 100 times faster than the other ones found here.
On my trusty Athlon II X4 system, it handles the Linux Kernel repository with its 5.6 million objects in just over a minute.

The Base Script

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
| awk '/^blob/ {print substr($0,6)}' \
| sort --numeric-sort --key=2 \
| cut --complement --characters=13-40 \
| numfmt --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest

When you run the above code, you will get nice human-readable output like this:

0d99bb931299  530KiB path/to/some-image.jpg
2ba44098e28f   12MiB path/to/hires-image.png
bd1741ddce0d   63MiB path/to/some-video-1080p.mp4


To achieve further filtering, insert any of the following lines before the sort line.

To exclude files that are present in HEAD, insert the following line:

| grep -vF "$(git ls-tree -r HEAD | awk '{print $3}')" \

To show only files exceeding a given size (e.g. 1 MiB = 2^20 B), insert the following line:

| awk '$2 >= 2^20' \
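For example, here is the base script with both optional filters spliced in before the sort line (keeping the illustrative 1 MiB threshold):

git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
| awk '/^blob/ {print substr($0,6)}' \
| grep -vF "$(git ls-tree -r HEAD | awk '{print $3}')" \
| awk '$2 >= 2^20' \
| sort --numeric-sort --key=2 \
| cut --complement --characters=13-40 \
| numfmt --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest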

Output for Computers

To generate output that’s more suitable for further processing by computers, omit the last two lines of the base script. They do all the formatting. This will leave you with something like this:

0d99bb93129939b72069df14af0d0dbda7eb6dba 542455 path/to/some-image.jpg
2ba44098e28f8f66bac5e21210c2774085d2319b 12446815 path/to/hires-image.png
bd1741ddce0d07b72ccf69ed281e09bf8a2d0b2f 65183843 path/to/some-video-1080p.mp4

🚀 Fast File Removal 🚀

Suppose you then want to remove the files a and b from every commit reachable from HEAD; you can use this command:

git filter-branch --index-filter 'git rm --cached --ignore-unmatch a b' HEAD
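As a side note, newer versions of Git deprecate git filter-branch in favor of git filter-repo. Assuming that tool is installed (it is a separate download), a roughly equivalent command would be the following; note that it rewrites all refs, not just HEAD:

# Requires git-filter-repo, a separate install
git filter-repo --invert-paths --path a --path b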


You should use BFG Repo-Cleaner.

According to the website:

The BFG is a simpler, faster alternative to git-filter-branch for
cleansing bad data out of your Git repository history:

  • Removing Crazy Big Files
  • Removing Passwords, Credentials & other Private data

The classic procedure for reducing the size of a repository would be:

git clone --mirror git://
java -jar bfg.jar --strip-biggest-blobs 500 some-big-repo.git
cd some-big-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push
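If you would rather prune by size than by blob count, BFG also accepts a size threshold; for example (the 100M cutoff is illustrative):

java -jar bfg.jar --strip-blobs-bigger-than 100M some-big-repo.git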


Step 1 Write all file SHA1s to a text file:

git rev-list --objects --all | sort -k 2 > allfileshas.txt

Step 2 Sort the blobs from biggest to smallest and write the results to a text file:

git gc && git verify-pack -v .git/objects/pack/pack-*.idx | egrep "^\w+ blob\W+[0-9]+ [0-9]+ [0-9]+$" | sort -k 3 -n -r > bigobjects.txt

Step 3a Combine both text files to get file name/sha1/size information:

for SHA in `cut -f 1 -d\  < bigobjects.txt`; do
    echo $(grep $SHA bigobjects.txt) $(grep $SHA allfileshas.txt) | awk '{print $1,$3,$7}' >> bigtosmall.txt
done

Step 3b If you have file names or path names containing spaces, try this variation of Step 3a. It uses cut instead of awk to get the desired columns, including spaces, from column 7 to the end of the line:

for SHA in `cut -f 1 -d\  < bigobjects.txt`; do
    echo $(grep $SHA bigobjects.txt) $(grep $SHA allfileshas.txt) | cut -d ' ' -f'1,3,7-' >> bigtosmall.txt
done

Now you can look at the file bigtosmall.txt in order to decide which files you want to remove from your Git history.

Step 4 To perform the removal (note this part is slow since it’s going to examine every commit in your history for data about the file you identified):

git filter-branch --tree-filter 'rm -f myLargeFile.log' HEAD
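An --index-filter variant of the same removal (in the style of the command shown earlier) is typically much faster, since it operates on the index without checking out each commit's tree:

git filter-branch --index-filter 'git rm --cached --ignore-unmatch myLargeFile.log' HEAD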


Steps 1–3a were copied from Finding and Purging Big Files From Git History.


The article was deleted sometime in the second half of 2017, but an archived copy of it can still be accessed using the Wayback Machine.


If you only want to have a list of large files, then I’d like to provide you with the following one-liner (source at renuo):

join -o "1.1 1.2 2.3" <(git rev-list --objects --all | sort) <(git verify-pack -v .git/objects/pack/*.idx | sort -k3 -n | tail -5 | sort) | sort -k3 -n

Its output will look like this:

commit        file name                                           size in bytes

72e1e6d20...  db/players.sql                                             818314
ea20b964a...  app/assets/images/background_final2.png                   6739212
f8344b9b5...  data_test/pg_xlog/000000010000000000000001                1625545
1ecc2395c...  data_development/pg_xlog/000000010000000000000001        16777216
bc83d216d...  app/assets/images/background_1forfinal.psd               95533848

The last entry in the list points to the largest file in your git history.

You can use this output to make sure that you’re not deleting anything with BFG that you still need in your history.


If you are on Windows, here is a PowerShell script that will print the 10 largest files in your repository:

$revision_objects = git rev-list --objects --all;
$files = $revision_objects.Split() | Where-Object {$_.Length -gt 0 -and $(Test-Path -Path $_ -PathType Leaf) };
$files | Get-Item -Force | select fullname, length | sort -Descending -Property Length | select -First 10


How can I track down the large files in the git history?

Start by analyzing, validating, and selecting the root cause. Use git-repo-analysis to help.

You may also find some value in the detailed reports generated by BFG Repo-Cleaner, which can be run very quickly by cloning to a DigitalOcean droplet and taking advantage of its 10 MiB/s network throughput.
