Make sure all URLs are relative on intake and execute #46528
joelgriffith merged 30 commits into elastic:master from joelgriffith:sec/disallow-file-ip-requests
Conversation
Pinging @elastic/kibana-security
| });
|
| // This really should never happen, but if we're tricked
| // into loading anything from here something very fishy is going on
puppeteer doesn't yet let you block responses, and since this can be a pretty bad situation, the job outright fails and closes down. I'm not sure if there's a better thing we can be doing here, but this is the best I can come up with given the API puppeteer currently has.
💚 Build Succeeded
💔 Build Failed
💔 Build Failed
This seems to handle the initial URL being bogus. If there is an XSS in a viz/dashboard that reporting tries to screenshot, it could redirect to something malicious still, though?
yarn.lock
Outdated
| resolved "https://registry.yarnpkg.com/is-absolute-url/-/is-absolute-url-3.0.1.tgz#e315cbdcbbc3d6789532d591954ac78a0e5049f6"
| integrity sha512-c2QjUwuMxLsld90sj3xYzpFYWJtuxkIn1f5ua9RTEYJt/vV2IsM+Py00/6qjV7qExgifUvt7qfyBGBBKm+2iBg==
|
| is-absolute-url@^3.0.2:
This dependency seems to not be great at its job: https://github.com/sindresorhus/is-absolute-url/blob/master/index.js
/^[a-z][a-z\d+\-.]*:/.test('http://attacker.com')
// => true
/^[a-z][a-z\d+\-.]*:/.test('Http://attacker.com')
// => false
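Assuming the check stays regex-based, one minimal hardening is a case-insensitive match (the `i` flag). `isAbsoluteUrl` here is an illustrative stand-in, not the package's actual export:

```javascript
// Sketch: the same regex with the `i` flag, so an uppercase scheme
// ('Http:', 'FILE:') can no longer slip past the absolute-URL check.
// Illustrative fix, not the package's actual code.
const isAbsoluteUrl = (url) => /^[a-z][a-z\d+\-.]*:/i.test(url);

isAbsoluteUrl('http://attacker.com'); // true
isAbsoluteUrl('Http://attacker.com'); // true — bypass closed
isAbsoluteUrl('/relative/path');      // false
```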
alexbrasetvik left a comment
Seems to outsource the problem to a dependency that's trivial to bypass by just having an uppercase character in the protocol.
💚 Build Succeeded
💔 Build Failed
💚 Build Succeeded
Looks like my gateway logic is working -- but not in a good way :)
💚 Build Succeeded
@kobelb @tsullivan thanks for hanging with me on this one, ready for final pass

@joelgriffith I think my comment above might've been buried: #46528 (comment)
| protocols?: string[];
| ips?: string[];
| }
This is my network-policy opus: you can do anything in terms of allowing or denying requests, including OR-ing a particular filter (see tests).
@tsullivan and @kobelb, after bending my mind to get the implementation to work properly (and adding a huge chunk of tests) I think this is finally finally finally ready. Please see the test file for example usage. It's incredibly flexible in what you can do with it.
💚 Build Succeeded |
| }
|
| const isHostMatch = rule.hosts
|   ? _.some(rule.hosts, host => (parsed.host || '').endsWith(host))
Using .endsWith seems like it could cause issues. For example, if a system administrator allows traffic to a-secure-kibana.com, we shouldn't also be allowing traffic to not-a-secure-kibana.com:
it('blocks traffic to not-a-secure-kibana.com', () => {
const rules = [
{ allow: true, hosts: ['a-secure-kibana.com']},
{ allow: false, hosts: ['not-a-secure-kibana.com']},
];
expect(allowRequest({ url: 'https://a-secure-kibana.com', ip: null }, rules)).toBe(true);
expect(allowRequest({ url: 'https://not-a-secure-kibana.com', ip: null }, rules)).toBe(false);
});
This could be resolved by changing the ordering of the rules, but I could see users tripping on this rather easily.
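One way to keep suffix matching (so subdomains still work) while avoiding the `.endsWith` trap is to require a label boundary. `hostMatches` is a hypothetical helper, not the code this PR landed:

```javascript
// Accept the rule host exactly, or any subdomain of it. Because the suffix
// check requires a leading dot, 'not-a-secure-kibana.com' no longer matches
// a rule for 'a-secure-kibana.com'. Illustrative only.
function hostMatches(requestHost, ruleHost) {
  return requestHost === ruleHost || requestHost.endsWith('.' + ruleHost);
}

hostMatches('a-secure-kibana.com', 'a-secure-kibana.com');     // true
hostMatches('not-a-secure-kibana.com', 'a-secure-kibana.com'); // false
hostMatches('sub.a-secure-kibana.com', 'a-secure-kibana.com'); // true
```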
| const isIPMatch = rule.ips
|   ? request.ip
|     ? _.some(rule.ips, ip => ip === request.ip)
|     : rule.allow
I'm not sure I'm following why the isIPMatch logic differs from all of the others and uses the rule.allow explicitly.
It's because we don't necessarily know the IP at request time, so there's a weird fork in how it's handled with regard to allow/deny.
I'll add a test case for this
💔 Build Failed
| return isRuleMatch ? rule.allow : undefined;
| }, undefined);
|
| return typeof allowed === 'undefined' || allowed;
| return typeof allowed === 'undefined' || allowed;
| return typeof allowed === 'undefined' ? false : allowed;
When the user has defined rules, but none of them match, it'd be safest for us to deny the request.
I'd recommend adding this test as well:
it('denies requests when no rules match', () => {
const url = 'https://not-kibana.com/cool/route/bro';
const rules = [{ allow: true, host: 'kibana.com' }];
expect(allowRequest(url, rules)).toEqual(false);
});
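The difference between the two return expressions is easy to see in isolation. Here `allowed` stands for the result of the rule scan, with `undefined` meaning no rule matched (a standalone demo, not the PR's code):

```javascript
// The existing fallback treats "no rule matched" as allowed;
// the suggested one treats it as denied (fail-closed).
const permissive = (allowed) => typeof allowed === 'undefined' || allowed;
const strict = (allowed) => (typeof allowed === 'undefined' ? false : allowed);

permissive(undefined); // true  — unmatched requests slip through
strict(undefined);     // false — unmatched requests are denied
permissive(false);     // false — explicit denials behave the same either way
strict(true);          // true
```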
| expect(allowRequest('https://kibana.com/cool/route/bro', [])).toEqual(true);
| });
|
| it('allows requests when no rules match', () => {
nit: technically, this rule matches everything.
| expect(allowRequest(url, rules)).toEqual(true);
| });
|
| it('denies requests when no rules match', () => {

| { allow: true, protocol: 'ws:' },
| { allow: true, protocol: 'wss:' },
| { allow: true, protocol: 'data:' },
| { allow: false }, // Default action is to deny!
💚 Build Succeeded |
💚 Build Succeeded |
* Fail file-protocol requests and bogon IP
* Revert "Fail file-protocol requests and bogon IP" (reverts commit 8a1ff56)
* Ensuring all URLs from PDF and PNG reports are relative
* Missing dep, saw it in a prior yarn.lock
* Tightening our URL checks
* More edge cases that can be triggered with a window.goto type behavior
* Javascript URLs
* Tightening implementation
* New networkPolicy that allows for setting allow/deny list for chromium requests
* Fixing tests, always fail file:// URLs
* Never allow file responses
* Make sure we test other protocols in deny list
* Don't allow `file:` protocols in the allow-list
* Expanding upon network policy to match ufw-style patterns
* Applying network policies to outbound and inbound requests
* Fixing gateway logic on network-policy
* My network-policy opus
* Moving to more explicit ufw format
* Updating snapshots
* Default reject requests when enabled and no rule matches
#48161 went swimmingly; however, 7.4 has been pretty gnarly to backport. I'm likely going to just implement the fix directly onto 7.4 given the number of non-trivial merge conflicts.
This ensures all URLs are relative upon job creation as well as during execution (just in case someone writes one to the index manually).