Many applications are used in closed networks where users and servers can (possibly) be trusted, but many others are used on arbitrary servers and are fed input from potentially untrusted users. Following is a discussion about some risks in the ways in which applications commonly use libcurl and potential mitigations of those risks. It is by no means comprehensive, but shows classes of attacks that robust applications should consider. The Common Weakness Enumeration project at https://cwe.mitre.org/ is a good reference for many of these and similar types of weaknesses of which application writers should be aware.
To avoid these problems, never pass sensitive data to programs via command line options. Instead, write the data to a file with restrictive permissions and have curl read it with the -K option.
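As a minimal sketch (the file name and credentials here are made up), the protected file can be a curl config file that is read with -K:

    # secrets.conf - make it readable only by its owner (e.g. chmod 600)
    # used as: curl -K secrets.conf https://example.com/
    user = "alice:s3cret"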
For applications that enable .netrc use, a user who manages to set the right URL might then be able to make the application pass on stored passwords.
To avoid these problems, don't use .netrc files and never store passwords in plain text anywhere.
To avoid this problem, use an authentication mechanism or other protocol that doesn't let snoopers see your password: Digest, CRAM-MD5, Kerberos, SPNEGO or NTLM authentication. Or even better: use authenticated protocols that protect the entire connection and everything sent over it.
If your application uses a fixed scheme or fixed host name, it is not safe as long as the connection is un-authenticated: there can be a man-in-the-middle, or the whole server might have been replaced by a malicious actor.
Un-authenticated protocols are unsafe. The data that comes back to curl may have been injected by an attacker. The data that curl sends might be modified before it reaches the intended server, if it even reaches the intended server at all.
Remedies: use an authenticated and encrypted protocol (such as HTTPS) so that both the server's identity and the transferred data can be verified.
A redirect to a file: URL would cause libcurl to read (or write) arbitrary files from the local filesystem. If the application returns the data back to the user (as would happen in some kinds of CGI scripts), an attacker could leverage this to read otherwise forbidden data (e.g. file://localhost/etc/passwd).
If authentication credentials are stored in the ~/.netrc file, or Kerberos is in use, any other URL type (not just file:) that requires authentication is also at risk. A redirect such as ftp://some-internal-server/private-file would then return data even when the server is password protected.
In the same way, if an unencrypted SSH private key has been configured for the user running the libcurl application, SCP: or SFTP: URLs could access password or private-key protected resources, e.g. sftp://user@some-internal-server/etc/passwd
The CURLOPT_REDIR_PROTOCOLS(3) and CURLOPT_NETRC(3) options can be used to mitigate against this kind of attack.
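A minimal sketch of such a mitigation (the URL is hypothetical, and the option values are one reasonable choice rather than the only one):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* only allow redirects to HTTP and HTTPS targets */
        curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                         (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
        /* never pick up credentials from the .netrc file */
        curl_easy_setopt(curl, CURLOPT_NETRC, (long)CURL_NETRC_IGNORED);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }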
A redirect can also specify a location available only on the machine running libcurl, including servers hidden behind a firewall from the attacker. e.g. http://127.0.0.1/ or http://intranet/delete-stuff.cgi?delete=all or tftp://bootp-server/pc-config-data
Applications can mitigate against this by disabling CURLOPT_FOLLOWLOCATION(3) and handling redirects themselves, sanitizing URLs as necessary. Alternatively, an app could leave CURLOPT_FOLLOWLOCATION(3) enabled but set CURLOPT_REDIR_PROTOCOLS(3) and install a CURLOPT_OPENSOCKETFUNCTION(3) callback function in which addresses are sanitized before use, as sketched below.
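A sketch of such a callback (POSIX sockets assumed; a real application would block more than just IPv4 loopback, for example private ranges and their IPv6 equivalents):

    #include <curl/curl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* refuse to open sockets toward loopback addresses */
    static curl_socket_t opensocket_cb(void *clientp, curlsocktype purpose,
                                       struct curl_sockaddr *address)
    {
      (void)clientp;
      (void)purpose;
      if(address->family == AF_INET) {
        const struct sockaddr_in *sin =
          (const struct sockaddr_in *)(void *)&address->addr;
        /* reject anything in 127.0.0.0/8 */
        if((ntohl(sin->sin_addr.s_addr) >> 24) == 127)
          return CURL_SOCKET_BAD;
      }
      return socket(address->family, address->socktype, address->protocol);
    }

It is installed on the easy handle with curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket_cb).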
All the malicious scenarios regarding redirected URLs apply just as well to non-redirected URLs, if the user is allowed to specify an arbitrary URL that could point to a private resource. For example, a web app providing a translation service might happily translate file://localhost/etc/passwd and display the result. Applications can mitigate against this with the CURLOPT_PROTOCOLS(3) option as well as by similar mitigation techniques for redirections.
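For example, a sketch for an application that only ever needs HTTP and HTTPS, given an already-created easy handle:

    /* refuse anything but HTTP and HTTPS, both for the initial URL
       and for any redirects */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                     (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));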
A malicious FTP server could in response to the PASV command return an IP address and port number for a server local to the app running libcurl but behind a firewall. Applications can mitigate against this by using the CURLOPT_FTP_SKIP_PASV_IP(3) option or CURLOPT_FTPPORT(3).
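A sketch of both mitigations, given an already-created easy handle (pick one):

    /* ignore the IP address in the server's PASV response and connect
       to the control connection's address instead */
    curl_easy_setopt(curl, CURLOPT_FTP_SKIP_PASV_IP, 1L);

    /* or use active mode; "-" tells libcurl to pick a default address
       for the PORT command */
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-");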
Local servers sometimes assume local access comes from friends and trusted users. An application that expects https://example.com/file_to_read but instead gets http://192.168.0.1/my_router_config might print a file that would otherwise be protected by the firewall.
Allowing your application to connect to local hosts, be it the same machine that runs the application or a machine on the same local network, might let an attacker effectively port-scan those hosts, depending on how the application and the servers act.
Use of the CURLAUTH_ANY option to CURLOPT_HTTPAUTH(3) could result in user name and password being sent in clear text to an HTTP server. Instead, use CURLAUTH_ANYSAFE which ensures that the password is encrypted over the network, or else fail the request.
Use of the CURLUSESSL_TRY option to CURLOPT_USE_SSL(3) could result in user name and password being sent in clear text to an FTP server. Instead, use CURLUSESSL_CONTROL to ensure that an encrypted connection is used or else fail the request.
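A sketch of the safer variants of both options, given an already-created easy handle:

    /* HTTP: never send credentials with a plain-text method */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANYSAFE);

    /* FTP: require TLS at least for the control connection, where the
       credentials travel, or fail the transfer */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_CONTROL);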
A maliciously crafted SCP: URL can embed extra shell commands in its path part, such as:

scp://user:pass@host/a;date >/tmp/test;
Applications must not allow unsanitized SCP: URLs to be passed in for downloads.
By default, libcurl prohibits redirects to file:// URLs.
When the curl team first realized this, we tried to filter out such attempts in order to protect applications from inadvertent probes of, for example, internal networks. This resulted in CVE-2019-15601 and the associated security fix.
However, we have since been made aware that the fix was far from adequate, as there are several other ways to accomplish more or less the same thing: accessing a remote host over the network instead of the local file system.
The conclusion we have come to is that this is a weakness or feature in the Windows operating system itself that we as an application cannot safely protect users against. It would just be a whack-a-mole race we don't want to participate in. There are too many ways to do it and there's no knob we can use to turn off the practice.
If you use curl or libcurl on Windows (any version), disable the use of the FILE protocol in curl or be prepared that accesses to a range of "magic paths" will potentially make your system try to access other hosts on your network. curl cannot protect you against this.
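In a libcurl application, a sketch of disabling FILE, given an already-created easy handle (the curl tool offers the --proto option for the same purpose):

    /* allow every protocol except FILE; an explicit allowlist of only
       the protocols the application needs is even better */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     (long)(CURLPROTO_ALL & ~CURLPROTO_FILE));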
If your curl-using script allows a custom URL, could the user also, perhaps unintentionally, pass other options to the curl command line through creative use of special characters?
If the user can set the URL, the user can also specify the scheme part to other protocols that you didn't intend for users to use and perhaps didn't consider. curl supports over 20 different URL schemes. "http://" might be what you thought, but "ftp://" or "imap://" might be what the user gives your application. Also, cross-protocol operations might be done by using a particular scheme in the URL but pointing it at a server speaking a different protocol on a non-standard port.
Remedies: restrict which schemes your application accepts, for example with the curl tool's --proto option or libcurl's CURLOPT_PROTOCOLS(3), and sanitize user-provided URLs before passing them to curl.
Web browsers mostly adhere to the WHATWG URL Specification, while curl's URL parser does not. This deviance makes some URLs copied between browsers (or returned over HTTP for redirection) and curl not work the same way. It can mislead users into getting the wrong thing, connecting to the wrong host or otherwise not working identically.
FTP is not only un-authenticated; the setup of the second connection is also a weak spot. The second connection, used for the data transfer, is either set up with the PORT/EPRT command, which makes the server connect back to the client on the given IP+PORT, or with PASV/EPSV, which makes the server set up a listening port and tell the client which IP+PORT to connect to.
Again, un-authenticated means that the connection might be meddled with by a man-in-the-middle or that there's a malicious server pretending to be the right one.
A malicious FTP server can respond to PASV commands with the IP+PORT of a totally different machine, perhaps even a third-party host. When many clients are made to connect to that third party, it can amount to a Distributed Denial-of-Service attack. If the client performs an upload, it can be made to send the data to another site, and if the attacker can affect what data the client uploads, the upload can be crafted to look like an HTTP request, effectively making the client issue HTTP requests to third-party hosts.
An attacker that manages to control curl's command line options can tell curl to send an FTP PORT command to ask the server to connect to a third party host instead of back to curl.
The fact that FTP uses two connections makes it vulnerable in a way that is hard to avoid.
A malicious server could cause libcurl to download an infinite amount of data, potentially causing all of memory or disk to be filled. Setting the CURLOPT_MAXFILESIZE_LARGE(3) option is not sufficient to guard against this. Instead, applications should monitor the amount of data received within the write or progress callback and abort once the limit is reached.
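A sketch of such a write callback (the 10 MB limit is an arbitrary number chosen for illustration):

    #include <curl/curl.h>

    #define MAX_DOWNLOAD (10L * 1024 * 1024) /* 10 MB, pick your own */

    struct counter {
      curl_off_t received;
    };

    static size_t write_cb(char *data, size_t size, size_t nmemb, void *userp)
    {
      struct counter *c = (struct counter *)userp;
      size_t total = size * nmemb;
      (void)data; /* a real callback would consume the data here */
      c->received += (curl_off_t)total;
      if(c->received > MAX_DOWNLOAD)
        return 0; /* returning less than 'total' makes libcurl abort the
                     transfer with CURLE_WRITE_ERROR */
      return total;
    }

It is installed with CURLOPT_WRITEFUNCTION(3), passing a pointer to a struct counter as CURLOPT_WRITEDATA(3).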
A malicious HTTP server could cause an infinite redirection loop, causing a denial-of-service. This can be mitigated by using the CURLOPT_MAXREDIRS(3) option.
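A sketch, given an already-created easy handle:

    /* follow redirects, but give up after (for example) ten of them */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 10L);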
Be sure to limit access to application logs if they could hold private or security-related data. Besides the obvious candidates like user names and passwords, things like URLs, cookies or even file names could also hold sensitive data.
To avoid this problem, you must of course use your common sense. Often, you can just edit out the sensitive data or just search/replace your true information with faked data.