780

One can request only the headers using HTTP HEAD, via the -I option in curl(1):

$ curl -I /

Lengthy HTML response bodies are a pain to deal with on the command line, so I'd like to get only the headers as feedback for my POST requests. However, HEAD and POST are two different methods.

How do I get cURL to display only response headers to a POST request?

11 Answers

965
-D, --dump-header <file>
       Write the protocol headers to the specified file.

       This  option  is handy to use when you want to store the headers
       that a HTTP site sends to you. Cookies from  the  headers  could
       then  be  read  in  a  second  curl  invocation by using the -b,
       --cookie option! The -c, --cookie-jar option is however a better
       way to store cookies.

and

-S, --show-error
       When used with -s, --silent, it makes curl show an error message if it fails.

from the man page. So:

curl -sS -D - www.acooke.org -o /dev/null

dumps the headers to stdout and sends the body to /dev/null (note that curl only follows redirects if you also pass -L). That's a GET, not a POST, but you can do the same thing with a POST - just add whatever option you're already using for POSTing data.

Note the - after the -D, which indicates that the output "file" is stdout.
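For instance, a POST variant might look like this (a sketch; the -d payload is a placeholder for whatever data you're already sending):

# -d makes this a POST; -D - still dumps the response headers to stdout
curl -sS -D - -d 'name=value' www.acooke.org -o /dev/null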


15 Comments

The above comment is valid if you're using PowerShell; for cmd.exe use curl -s -D - http://yahoo.com -o nul
@JJS For me $null worked on Win7. Is it due to Clink being installed on Windows?
The "-" in front of the URL may seem unimportant, but it's not.
@WahidSadik Why is that the case in particular? What's the function of the single dash?
@mamachanko -D takes an argument that says where the output should go. The single dash means it should go to stdout.
306

The other answers require the response body to be downloaded. But there's a way to make a POST request that will only fetch the header:

curl -s -I -X POST http://www.google.com

-I by itself performs a HEAD request; this can be overridden by -X POST to perform a POST (or any other) request and still only get the header data.

11 Comments

This answer is actually correct, because web servers can return different headers based on the request method. If you want to check headers on GET, you have to use a GET request.
This is the most correct answer, in my opinion. It is easy to remember, it actually sends the desired request, and it doesn't download the whole response body (or at least doesn't output it). The -s flag is not necessary.
This does not work when you actually want to POST some data. Curl says: Warning: You can only select one HTTP request method! You asked for both POST Warning: (-d, --data) and HEAD (-I, --head).
@nickboldt The point here is that a server might respond differently to a HEAD request than to a POST or GET request (and some servers actually do that), so -X HEAD is not a reliable solution here.
I get Warning: You can only select one HTTP request method! You asked for both POST Warning: (-d, --data) and HEAD (-I, --head).
112

The following command displays extra information:

curl -X POST http://httpbin.org/post -v > /dev/null

You can ask the server to send just the headers, instead of the full response:

curl -X HEAD -I http://httpbin.org/

Note: in some cases, a server may send different headers for POST and HEAD, but in almost all cases the headers are the same.
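If you want to check whether a particular server does this, a quick sketch (assuming a shell with process substitution, such as bash) is to diff the two header sets:

# Compare headers from a HEAD request against those from a real POST
diff <(curl -s -I http://httpbin.org/post) \
     <(curl -s -D - -o /dev/null -X POST http://httpbin.org/post)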

12 Comments

It's unfortunate that the other answer won, because this is the correct answer - it doesn't unnecessarily transfer a ton of data.
@dmd If I understand the cURL manual for -X, --request correctly, -X HEAD still results in "a ton of data" being transferred, but there is -I, --head, which should result in what you are anticipating.
Problem with -X HEAD is that the server might respond differently, since it now receives a HEAD request instead of a GET (or whatever the previous request was)
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the Warning: way you want. Consider using -I/--head instead.
@bfontaine A perfect example of an XY problem.
62

For long response bodies (and various other similar situations), the solution I use is always to pipe to less, so

curl -i https://api.github.com/users | less

or

curl -s -D - https://api.github.com/users | less

will do the job.

2 Comments

These are not equivalent: the first issues a HEAD request, to which many servers respond differently; the second issues a GET request, which is more like what we are looking for here.
This is useful, but not an answer to the question. Therefore, I am voting down.
39

Much easier; this also follows redirects.

curl -IL http://shortlinktrack.er/in-the-shadows
  • -I is an alias of --head; the man page states that it fetches the headers only
  • -L is an alias of --location; the man page states that curl will follow the Location header if there is one

1 Comment

Awesome! Clean and simple. Thank you!
34

Maybe it is a little bit extreme, but I am using this super-short version:

curl -svo. <URL>

Explanation:

-v prints debug information (which includes the headers)

-o. sends the web page data (which we want to ignore) to a certain file, . in this case, which is a directory and thus an invalid destination, so the output is discarded.

-s no progress bar, no error information (otherwise you would see Warning: Failed to create the file .: Is a directory)

Warning: the result always fails in terms of exit code, whether the host is reachable or not. Do not use it in, say, conditional statements in shell scripts...
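If you do need a usable exit code in scripts, a sketch of the same idea with /dev/null as the sink instead (example.org is a placeholder URL):

# /dev/null is a valid destination, so curl's exit code reflects the transfer itself
curl -svo /dev/null https://example.org && echo "reachable"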

7 Comments

Why use -o. instead of -o /dev/null?
@bfontaine -o. is used instead of -o /dev/null for brevity
@bfontaine there are other answers that show how to do this the most correct way, this one is here to show the short alternative that does the same thing basically.
You should clarify in your answer that this command always fails. curl -svo. <url> && echo foo won't print foo because -o. makes curl return a non-zero (= error) code: curl: (23) Failed writing body.
A "solution" that ends with returning an error is not a valid solution; it's a happy accident. If something goes wrong, you have no way of knowing, because you've already swallowed the error.
16

The other answers have not worked for me in all situations; the best solution I could find, which works with POST as well, taken from here:

curl -vs 'https://some-site.com' 1> /dev/null

2 Comments

I had to put the URL between quotes to get this working.
Whether this is necessary might depend on the URL and the shell used. I improved the answer accordingly. Thanks.
13

headcurl.cmd (Windows version)

curl -sSkv -o NUL %* 2>&1
  • I don't want a progress bar: -s,
  • but I do want errors: -S,
  • not bothering about valid HTTPS certificates: -k,
  • getting high verbosity: -v (this is about troubleshooting, isn't it?),
  • no output (in a clean way),
  • oh, and I want to forward stderr to stdout, so I can grep against the whole thing (since most or all output arrives on stderr),
  • %* means "pass all parameters to this script" (see stackoverflow.com/a/980372/444255); usually that's just one parameter: the URL you are testing.

Real-world example (troubleshooting proxy issues):

C:\depot>headcurl google.ch | grep -i -e http -e cache
Hostname was NOT found in DNS cache
GET HTTP://google.ch/ HTTP/1.1
HTTP/1.1 301 Moved Permanently
Location: http://www.google.ch/
Cache-Control: public, max-age=2592000
X-Cache: HIT from company.somewhere.ch
X-Cache-Lookup: HIT from company.somewhere.ch:1234

Linux version

for your .bash_aliases / .bashrc:

alias headcurl='curl -sSkv -o /dev/null $@  2>&1'
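Note that $@ inside an alias is not expanded the way it is in a script; arguments typed after the alias are simply appended at the end. A shell function (a sketch with the same flags) places the arguments more predictably:

# Same flags as the alias, but "$@" lands exactly where intended
headcurl() { curl -sSkv -o /dev/null "$@" 2>&1; }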

4 Comments

This will download the body and consume bandwidth and time. @siracusa's answer (stackoverflow.com/a/38679650/6168139) doesn't have this overhead.
If and when you want POST, add -X POST to the passthrough parameters; if you want GET, use GET (i.e. the default), as responses may differ. Unless you do heavy curling in production scripts (not for diagnosis and development), I don't care about a bit of bandwidth.
I am planning to use it to check whether files on the server have been updated, using 'Last-Modified'. The files themselves are large, some are in the GBs, and I am usually on cellular internet. So this large bandwidth use is an issue for me.
That would be hacky. I don't need to do this, as siracusa's answer performs the task accurately.
7

The -w, --write-out <format> option can be very helpful. You can get all HTTP headers, or a single one:

$ curl -s -w '%{header_json}' https://httpbin.org/get -o /dev/null
{"date":["Sun, 18 Feb 2024 13:47:12 GMT"],
"content-type":["application/json"],
"content-length":["254"],
"server":["gunicorn/19.9.0"],
"access-control-allow-origin":["*"],
"access-control-allow-credentials":["true"]
}

$ curl -s -w '%header{content-type}' https://httpbin.org/get -o /dev/null
application/json
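Other write-out variables work the same way; for instance, to print just the status code (again with the body discarded):

$ curl -s -w '%{response_code}\n' https://httpbin.org/get -o /dev/null
200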


Comments


0

To get only the response headers, use silent output (-s) alongside -i, then output only the first 10 lines using the head command.

curl -si 0:80 | head
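If the header block isn't exactly 10 lines, a variant (a sketch) that stops at the blank line separating headers from body:

# sed quits at the first blank(ish) line, i.e. the end of the headers
curl -si 0:80 | sed '/^[[:space:]]*$/q'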

Comments
