
HTTPDiff

A library written for finding differences between HTTP responses.

Disclaimer

  • This is considered to be a beta release, and may contain bugs and unintentional behavior. Consider yourself warned!

About

HTTPDiff is a library built for finding differences between HTTP responses.

A lot of web pentesting tools suck, using regexes or hardcoded values to determine whether something is different. These methods produce false negatives no matter how much you tweak those values. HTTPDiff takes a more dynamic approach to differentiating responses, aiming to reduce the false negatives produced during a scan.

By sending multiple requests with a known outcome, it is possible to calibrate a baseline of how the application normally behaves. HTTPDiff can then be used to find deviations from the default behavior. HTTPDiff analyzes every section of the response: the status code, reason, headers, body, response time, and even errors.

Use cases

  • Want to create a SQL injection scanner? Send a bunch of payloads with random strings for calibration, then send pairs of payloads (e.g. ' or '1'='1 and ' or '1'='2) and check for differences!

  • If you want to brute-force endpoints and directories on a web application, you can start by sending a series of requests to known invalid endpoints. The baseline can then be used to determine whether any other endpoint behaves in a similar way, or is somehow different. Go to Diffuzz for a good example of how to utilize this library.
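As a toy sketch of the brute-forcing idea, the following self-contained script uses a made-up fake_server function instead of real HTTP, and reduces the baseline to a common prefix. The real library tracks static strings and indexes far more carefully, so treat this only as an illustration of the concept:

```python
import random
import string

def fake_server(path):
    # A made-up application: one real endpoint; everything else gets a
    # 404 page that echoes the requested path (a dynamic region).
    if path == "/admin":
        return "200 OK\nWelcome, admin"
    return f"404 Not Found\nNo such page: {path}"

def random_path():
    return "/" + "".join(random.choices(string.ascii_lowercase, k=10))

# Calibrate: collect responses for known-invalid endpoints and keep only
# their common prefix, as a crude stand-in for HTTPDiff's static Items.
samples = [fake_server(random_path()) for _ in range(5)]
static_prefix = samples[0]
for s in samples[1:]:
    n = 0
    while n < min(len(static_prefix), len(s)) and static_prefix[n] == s[n]:
        n += 1
    static_prefix = static_prefix[:n]

# Scan: an endpoint whose response deviates from the baseline stands out.
hit = fake_server("/admin")
print(hit.startswith(static_prefix))  # False: /admin deviates from the baseline
```

In a real scan, the baseline is built from whole responses (headers, body, timing), and the deviation check is done per section rather than on a single prefix.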

Installation

python3 -m pip install httpdiff

How it all works

Here are some details of how the library is built; feel free to skip this section if you're not interested.

Here's the process for calibrating:

  1. The add_response function of the Baseline object accepts a Response object as a parameter (among others). Multiple Blobs are created: one each for the headers, the reason (status code + message), the response time, the body, etc.
  2. Each section of the first response is stored as original_line.
  3. A new response is inputted.
  4. Levenshtein's algorithm (similar to git diff) is used to generate opcodes describing how to transform the original lines into the new lines.
  5. The opcodes are used to determine where static strings are within the HTTP response; these indexes are stored as individual Item objects.
     5.1. Insertions are also tracked; it is often possible to see insertions of the same length between static strings.
  6. A new response is inputted.
  7. The indexes of all the static Items are updated, removing any dynamic strings that were accidentally equal the last time around.
  8. Repeat from step 6.
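The opcode idea in steps 4 and 5 can be illustrated with Python's standard-library difflib. This is a stand-in, not HTTPDiff's internals (HTTPDiff uses a Levenshtein-style diff): regions tagged as equal across responses are candidates for static strings, while replaced regions are dynamic.

```python
from difflib import SequenceMatcher

resp_a = "HTTP/1.1 200 OK\nid=12345\nhello world"
resp_b = "HTTP/1.1 200 OK\nid=67890\nhello world"

matcher = SequenceMatcher(a=resp_a, b=resp_b)
static_regions = [
    resp_a[i1:i2]
    for tag, i1, i2, j1, j2 in matcher.get_opcodes()
    if tag == "equal"
]
# The "12345"/"67890" span is tagged as a replacement and would be
# tracked as a dynamic region; everything else is a static candidate.
print(static_regions)
```

HTTPDiff stores the indexes of such regions (as Item objects) rather than the substrings themselves, and refines them as more responses come in.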

Here's the process for diffing:

  1. A new response is inputted.
  2. Opcodes are generated in a similar manner as in the calibration phase.
  3. Check that all known static strings are present, and that all known insertions occur at the expected index and are of the expected length.
  4. Return a Diff object for each anomaly.
  5. (Optional) Find differences in two responses with expected different outcomes and compare the diffs.
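A minimal sketch of the check in step 3, assuming the baseline has been reduced to an ordered list of static strings. The names static_items and find_anomalies are illustrative, not HTTPDiff's API:

```python
# Static strings the calibration phase found in every baseline response.
static_items = ["HTTP/1.1 200 OK", "<title>Shop</title>", "</html>"]

def find_anomalies(body, items):
    """Yield a message for each expected static string that is missing
    or out of order in the response body."""
    pos = 0
    for item in items:
        idx = body.find(item, pos)
        if idx == -1:
            yield f"missing or moved static string: {item!r}"
        else:
            pos = idx + len(item)

normal = "HTTP/1.1 200 OK\n<title>Shop</title>\n...\n</html>"
broken = "HTTP/1.1 200 OK\n<title>SQL error</title>\n...\n</html>"

print(list(find_anomalies(normal, static_items)))  # no anomalies
print(list(find_anomalies(broken, static_items)))  # the changed title is flagged
```

A real implementation would also compare the insertions between static regions against their expected indexes and lengths, per step 3, and report each mismatch as a Diff.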

Example usage

Go visit Diffuzz to see an awesome fuzzer utilizing HTTPDiff.

Here's a small example script showing how HTTPDiff can be used:

from httpdiff import Response, Baseline
import string
import random
import requests

def calibrate_baseline(baseline):
    for _ in range(10):
        value = "".join(random.choice(string.ascii_letters) for _ in range(random.randint(3,50)))
        resp = requests.get(f"https://someurl/endpoint?parameter={value}")
        httpdiff_resp = Response(resp)
        baseline.add_response(httpdiff_resp, payload=value)  # Adding value as a parameter for finding reflections

    # It is often smart to repeat a single payload, so that a potentially cached response is included in the baseline
    resp = requests.get(f"https://someurl/endpoint?parameter={value}")
    httpdiff_resp = Response(resp)
    baseline.add_response(httpdiff_resp, payload=value)

def scan(baseline):
    payload1 = "' or '1'='1"
    resp = requests.get(f"https://someurl/endpoint?parameter={payload1}")
    httpdiff_resp1 = Response(resp)

    # payload2 should be similar to payload1, but give a different result if
    # the target is vulnerable. Kind of an opposite payload.
    payload2 = "' or '1'='2"
    resp = requests.get(f"https://someurl/endpoint?parameter={payload2}")
    httpdiff_resp2 = Response(resp)

    diffs = list(baseline.find_diffs(httpdiff_resp1))
    diffs2 = list(baseline.find_diffs(httpdiff_resp2))
    if diffs != diffs2:
        print("Vulnerable to SQL Injection!")
    else:
        print("Not vulnerable to SQL injection!")

def main():
    baseline = Baseline()
    calibrate_baseline(baseline)
    scan(baseline)

if __name__ == "__main__":
    main()

Todo

  • Work on tracking insertions better: can we detect changes even within insertions?
  • Improve method for diffing integer ranges
  • Properly handle errors
  • Do a lot more testing of this library; bugs may still be present.

Tips

Some tips for successfully creating your own scanner of some sort:

  • Use random values of random length when calibrating a baseline
  • Use cachebusters
  • Repeat one set of values during calibration (to ensure potential cache hits are included in the baseline)
  • Verify the baseline upon a positive result
  • Verify the same payload a couple of times upon a positive result to verify it's not a fluke
  • Create an issue if you catch any mistakes in the library
  • Tell others about HTTPDiff
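For the cachebuster tip above, one common approach is to append a random query parameter so intermediate caches treat every request as unique. The parameter name cb and the helper below are illustrative:

```python
import random
import string

def with_cachebuster(url):
    # Append a random token as an extra query parameter.
    token = "".join(random.choices(string.ascii_lowercase, k=12))
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}cb={token}"

print(with_cachebuster("https://someurl/endpoint?parameter=abc"))
```

Combine this with repeating one set of values during calibration, so that both cached and uncached behavior end up in the baseline.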
