===========
paramspider
===========

-------------------------------------------------------
Mining parameters from the dark corners of Web Archives
-------------------------------------------------------

:Author: Aquila Macedo <aquilamacedo@riseup.net>
:Date: 2024-03-27
:Copyright: Expat
:Version: 1.0.1
:Manual section: 1
:Manual group: paramspider

.. _paramspider:

SYNOPSIS
========
::

 paramspider [-h] [-d DOMAIN] [-l LIST] [-s] [--proxy PROXY] [-p PLACEHOLDER]

DESCRIPTION
===========
paramspider allows you to fetch URLs related to any domain, or to a list
of domains, from Wayback Archives. It filters out "boring" URLs, allowing
you to focus on the ones that matter most.

OPTIONS
=======

-h, \--help
  Display command usage and options.

-d DOMAIN, \--domain DOMAIN
  Domain name to fetch related URLs for.

-l LIST, \--list LIST
  File containing a list of domain names.

-s, \--stream
  Stream URLs on the terminal.

\--proxy PROXY
  Set the proxy address for web requests.

-p PLACEHOLDER, \--placeholder PLACEHOLDER
  Placeholder for parameter values (default: "FUZZ").

EXAMPLES
========
Common usage:

Discover URLs for a single domain::

  $ paramspider -d example.com

Discover URLs for multiple domains from a file::

  $ paramspider -l domains.txt

Stream URLs on the terminal for a domain::

  $ paramspider -d example.com -s

Set up web request proxy::

  $ paramspider -d example.com --proxy '127.0.0.1:7890'

Add a placeholder for URL parameter values (default: "FUZZ")::

  $ paramspider -d example.com -p '"><h1>reflection</h1>'
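The options above can be combined. For instance, a sketch (assuming
``domains.txt`` exists and a proxy is listening on the given address)
that streams URLs for several domains through a proxy::

  $ paramspider -l domains.txt -s --proxy '127.0.0.1:7890'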