Protect users who get spammy phishing links in emails

Just as you can force your users to use the system DNS server, you can force them through the non-transparent proxy (deny HTTP(S) to destinations other than IPFire). You can publish the proxy settings with DHCP.
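
As a rough illustration of the “deny HTTP(S) to destinations other than IPFire” part, here is a minimal sketch in plain iptables (addresses and the proxy port are assumptions; on IPFire itself such rules would normally be created through the WUI firewall, not by hand):

# Assumed example: GREEN is 192.168.0.0/24 and IPFire's GREEN address is 192.168.0.1.
# Block direct HTTP/HTTPS from GREEN so that only the proxy on IPFire can reach the web.
iptables -A FORWARD -s 192.168.0.0/24 -p tcp -m multiport --dports 80,443 -j REJECT
# Client-to-proxy traffic (e.g. to 192.168.0.1:800, the default web proxy port)
# terminates on IPFire itself, so it goes through INPUT and is not hit by this rule.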

1 Like

You can publish the proxy settings with DHCP.

Are they unset when the clients get a new lease at home? I’m guessing yes, but I want to be sure; if so, that’s more viable than I was thinking.


This looks interesting. It is too bad it is not more user-friendly to do.
No WUI, and not available as a plugin.
Using scripts is not my thing. :frowning:

+1, a WUI for this would be nice

It is standard behaviour for DHCP requests/answers that the attributes are set according to the current environment.
Whether they persist is a property of the device/OS.

@anon65703081 and @hvacguy ,
there is already a development effort out there which uses IPSet for IP address blacklists --> https://lists.ipfire.org/pipermail/development/2020-February/007085.html but with fixed lists. Not sure when and if this will be released.
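
For illustration only, the general shape of such an IPSet-based block looks like this (this is not Tim’s actual implementation from the linked thread; the set name and addresses are made up):

# Create a hash-based set and fill it with blacklisted addresses
ipset create phishing_blacklist hash:ip
ipset add phishing_blacklist 203.0.113.10
ipset add phishing_blacklist 198.51.100.23
# Drop forwarded traffic towards any address in the set
iptables -I FORWARD -m set --match-set phishing_blacklist dst -j DROP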

Best,

Erik

2 Likes

Right, so if it’s set via DHCP on my network, and a user goes home and gets a lease on their home network, will it forget the proxy settings?

The user gets the information from the respective DHCP server. This means that when he comes home, whatever his home DHCP server specifies applies, and when he returns to the company, the settings there apply.

1 Like

Yes, but in the case where the home DHCP server does not specify any proxy settings, you mean the client is expected to clear the settings that were set by the last lease?

there is already a development effort out there which uses IPSet for IP address blacklists --> https://lists.ipfire.org/pipermail/development/2020-February/007085.html but with fixed lists. Not sure when and if this will be released.
Best,
Erik

Would be nice to push; I did not know about these possibilities until today.

1 Like

I am assuming yes

1 Like

If the device is configured right. But who knows. :roll_eyes:

@anon45931963, once again: which environment do you target?

@anon65703081
I think the core developers are waiting for an actual push from Tim, as you can read in the topic. IPset has been available since 2016, but it seems it is not used that often (maybe I am wrong about this). I think I won’t use Tim’s development, even though what he has done is really great; I simply use my own lists and wanted to have a more liberal choice.

@anon45931963
To go one step further with your request, I have now modified the script and integrated only your list.

The script looks like this:

#!/bin/bash -

set -x

#
# Test script for individual automation for custom blacklists and their updates
# ummeegge[at]ipfire.org, $date Wed Oct 01 17:35:42 CEST 2015
# Modified for community request --> 
# https://community.ipfire.org/t/protect-users-who-get-spammy-phishing-links-in-emails/4111
# for a phishing list. $date Sat Dec 19 10:42:23 CEST 2020
###########################################################################################
#

# Locations
INSTDIR="/tmp/customlists";
TMPLIST="/tmp/customlists/list";
SET="/var/ipfire/urlfilter/settings";
LIST="/var/ipfire/urlfilter/blacklists/custom/blocked/domains";

# Check for installation dir
if [ -d "${INSTDIR}" ]; then
	rm -rf ${INSTDIR};
    mkdir ${INSTDIR};
else
    mkdir ${INSTDIR};
fi

# Set hook for 'Customblacklist' in URL-Filter CGI and activate 'Custom blacklist'
HOCK=$(grep -Fx 'ENABLE_CUSTOM_BLACKLIST=on' ${SET})
if [[ -z ${HOCK} ]]; then
    sed -i -e 's/ENABLE_CUSTOM_BLACKLIST=.*/ENABLE_CUSTOM_BLACKLIST=on/' ${SET}
    squidGuard -c /etc/squidGuard/squidGuard.conf &
    sleep 2;
fi

#---------------------------------------------------------------------------------

# Change dir
cd ${INSTDIR};
## Download URLs and files for external Domain lists
URL1="https://phishing.army/download/phishing_army_blocklist_extended.txt";
FILE1="phishing_army_blocklist_extended.txt";
####################################################################################
## If you want to integrate your own local blacklist, delete the '#' in the USERLIST
## line and set the path to your list and the name.
#USERLIST="/path/to/list/nameoflist";
####################################################################################

## Download and process and/or integrate it
# Get all lists
wget ${URL1} >/dev/null 2>&1;
## Processing domains
#1
sed -e '/^#/d' -e '/^?/d' ${FILE1} > ${TMPLIST};

############################################################################################################
## if you want to integrate your own above defined local blacklist, delete the '#' in the '${USERLIST}' line
#cat ${USERLIST} >> ${TMPLIST};
############################################################################################################

# remove unwanted carriage returns (\r instead of a literal ^M so the script survives copy & paste)
sed -i -e 's/\r//g' ${TMPLIST}

# sort and delete double entries and paste it to custom blacklist
sort ${TMPLIST} | uniq > ${LIST};

# Updating squidGuard database and restart and log it to syslog
squidGuard -C "${LIST}";
squidGuard -c /etc/squidGuard/squidGuard.conf &
sleep 2;
/etc/init.d/squid flush;
logger -t ipfire "URL filter: Custom blacklist updated";

# CleanUP
#rm -rf ${INSTDIR};

# EOF

The URL filter and Squid are running, including logging, with modified block page settings.
The list you posted above has:

$ wc -l customlists/phishing_army_blocklist_extended.txt 
19814 customlists/phishing_army_blocklist_extended.txt

entries.

I executed it in debugging mode and checked the time needed:

$ time ./domain-phising-list 
+ INSTDIR=/tmp/customlists
+ TMPLIST=/tmp/customlists/list
+ SET=/var/ipfire/urlfilter/settings
+ LIST=/var/ipfire/urlfilter/blacklists/custom/blocked/domains
+ '[' -d /tmp/customlists ']'
+ rm -rf /tmp/customlists
+ mkdir /tmp/customlists
++ grep -Fx ENABLE_CUSTOM_BLACKLIST=on /var/ipfire/urlfilter/settings
+ HOCK=ENABLE_CUSTOM_BLACKLIST=on
+ [[ -z ENABLE_CUSTOM_BLACKLIST=on ]]
+ cd /tmp/customlists
+ URL1=https://phishing.army/download/phishing_army_blocklist_extended.txt
+ FILE1=phishing_army_blocklist_extended.txt
+ wget https://phishing.army/download/phishing_army_blocklist_extended.txt
+ sed -e '/^#/d' -e '/^?/d' phishing_army_blocklist_extended.txt
+ sed -i -e 's/^M//g' /tmp/customlists/list
+ sort /tmp/customlists/list
+ uniq
+ squidGuard -C /var/ipfire/urlfilter/blacklists/custom/blocked/domains
+ sleep 2
+ squidGuard -c /etc/squidGuard/squidGuard.conf
+ /etc/init.d/squid flush
Stopping Squid Proxy Server (this may take up to a few minutes)....                                    [  OK  ]
Creating Squid swap directories...                                                                     [  OK  ]
Starting Squid Proxy Server...
2020/12/19 10:37:18| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback.                [  OK  ]
+ logger -t ipfire 'URL filter: Custom blacklist updated'
./domain-phising-list  1.80s user 1.54s system 25% cpu 13.315 total

The warning can be ignored --> see the topic “Squid web proxy started with 2 errors”.

URL-Filter custom blacklist:
[screenshot “url-filter_after_exporting”]

Just tested it with hxxx://102312.*.com/ which looks like this: [screenshot of the block page]

URL-Filter logging: [screenshot]

Needed resources with 5 users:

$ ./ps_mem.py | grep -E 'squid|redirect_|squidGuard'
  5.0 MiB +   1.8 MiB =   6.8 MiB	squidGuard (4)
 21.7 MiB +   3.7 MiB =  25.4 MiB	redirect_wrappe (4)
 24.7 MiB +   3.0 MiB =  27.7 MiB	squid (2)

To add a little more butter to the fish, as we say here.

Best,

Erik

EDIT: Wiki for “Web Proxy Auto-Discovery Protocol (WPAD) / Proxy Auto-Config (PAC)” --> https://wiki.ipfire.org/configuration/network/proxy/extend/wpad .
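
As a hedged illustration of what the wiki page describes (the web root path, addresses and the proxy port 800 are assumptions; follow the wiki for the authoritative setup), a minimal PAC file could be generated like this:

# Write a minimal wpad.dat that sends all web traffic to the IPFire proxy
# (assumed to listen on 192.168.0.1:800) and falls back to DIRECT otherwise.
cat > /srv/web/ipfire/html/wpad.dat << 'EOF'
function FindProxyForURL(url, host) {
    return "PROXY 192.168.0.1:800; DIRECT";
}
EOF
# Clients then discover it via DHCP option 252 or via the well-known
# http://wpad.<domain>/wpad.dat URL, as described on the wiki page.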

3 Likes

Thanks. I was more interested in the manageability via the WUI.

1 Like

Maybe this is out of place, and with people working from home everyone’s policy is different (not speaking about home IPFire use), but I rather like the approach of something like SpamAssassin/Mimecast/Tessian filtering inline to remove, alert on, or sandbox the link.

Versus DNS, since those server IPs can change very often or come from multiple IPs, and they are usually on shared hosting or shared IPs, which can cause legitimate businesses to be blocked.

And continue to provide proper training and raise awareness among end users regarding spam and phishing and how to identify them. No matter how expensive or how advanced the technology alone is, all it takes is one user clicking.

Teach people to identify and look twice before responding or clicking; it only takes a few seconds. Just like crossing the street or changing lanes.

2 Likes

Hi,

to sum this discussion up a little bit, these are the threats @anon45931963 has to fight against
(as probably most of us do as well):

  1. Malicious sites (including, but not limited to phishing) hosted on dedicated IP addresses.
  2. Malicious sites hosted on CDNs or popular shared IP addresses, where blacklisting the entire IP address cannot be done because it causes too many false positives.
  3. Malicious content hosted on legitimate FQDNs, such as Dropbox, Google Drive, or similar.

All of them have already slipped through OP’s mail infrastructure, i.e. they were not identified
as malicious in the first place.

I think we all agree that fighting against (1) is not a problem as long as there is a reasonably
accurate and up-to-date blacklist (or several of them) available. It might need some tricks and
scripting to get that working in IPFire at the time of writing, but ultimately, it is not a problem.

In case of (2), we disagree on how to handle this (DNS filtering vs. URL filter), primarily due
to the fact that distributing proxy settings can be tricky, depending on your environment.

Case (3) is addressed by neither DNS nor URL filtering. (In theory, TLS interception could help
to see the full URL at the proxy; however, this causes other issues and I am unaware of any URL
blacklist that is accurate enough. In fact, some services for user-generated content evade URL
filtering by randomising links and putting in tokens. Worse, AV detection rates are terrible in
this scenario, partly because it is very easy to obfuscate malicious content on the web.)

Just two remarks on this not very jolly situation from my point of view:

  1. In several environments, I built something like a mail buffer: every mail that passes the spam filter and AV scanners is duplicated and stored on a dedicated machine, where it is deleted automatically after 24 (in some cases 48) hours. As soon as new AV signatures or blacklist updates arrive, the current mail buffer is scanned again. In case of a detection, a message is sent out to both the user (“please do not open message XYZ, and in case you have opened it, call ABC immediately”) and the internal monitoring system. While this is not a very elegant solution, it at least allows detecting malicious messages that slipped through (which happens several times a day), which is better than nothing. Perhaps such a setup might help OP (and others) as well; a rough sketch of the re-scan step follows after this list.

  2. In my humble opinion, BYOD is the sore thumb of this discussion. Yes, I am aware that not every company (or sometimes even agency) is willing to or can afford to provide dedicated notebooks/PCs to its employees; however, I would strongly recommend trying to change that first. Enforcing policies is much easier if the company has more or less full control over the employee’s device, and it enables defense mechanisms on a completely different level (such as HIPS or USB port monitoring). Pushing proxy settings to clients is usually much easier as well.
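
For the re-scan step mentioned in (1), here is a minimal sketch, assuming the buffered mails are stored as individual files under /var/mailbuffer and ClamAV is installed (paths, retention handling and the alert address are made up for illustration):

#!/bin/bash
# Re-scan the mail buffer after a signature update and alert on any detection.
BUFFER="/var/mailbuffer"          # assumed location of the buffered mails
ALERT="security@example.com"      # assumed alert address

# Drop buffered mails older than 24 hours (1440 minutes)
find "${BUFFER}" -type f -mmin +1440 -delete

# Scan the remaining mails; with -i and --no-summary only infected files are printed
RESULT=$(clamscan -r -i --no-summary "${BUFFER}")
if [ -n "${RESULT}" ]; then
    echo "${RESULT}" | mail -s "Mail buffer re-scan: detections found" "${ALERT}"
    logger -t mailbuffer "re-scan found detections"
fi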

Perhaps this might be helpful as food for thoughts. :slight_smile:

Thanks, and best regards,
Peter Müller

3 Likes

Hi all -

As a side note, I just wanted to say thank you to all participants for this very interesting discussion, which really adds a lot of valuable points to my consideration of how I want to use IPFire in my environment to protect grown-ups and kids – all people with special data protection needs that have to be addressed appropriately. This is the reason why I read (and write) in the forum on an almost daily basis. Thanks again and take care!

Datamorgana

4 Likes

Don’t get me wrong, I’ve got Barracuda ESS in my stack. DNS filtering is for the ones that get through; I just don’t like relying on one thing alone. And there are also links coming from Google search results; no mail filter has you covered in that case.

@ummeegge looks good, I’ll have to do some testing with sending proxy settings over DHCP and the URL filter (mostly to ensure it’s not sticky and doesn’t break the internet when they go home). I’m definitely not going down the path of IPSets in iptables, though.

Do you have a cron job set to run the script periodically, or something like that? The list gets updated every 6 hours. Actually, looking at the script, I imagine you’d only want to run it out of hours, as it takes Squid down, lol.

Hi @anon45931963

yes, /etc/fcron.* might be useful for that, or you can also use fcrontab -e if it should be more specific. Regarding the update schedule: if you are not sure how often/when the list is updated, you can make a check via e.g.:

curl -Ls https://phishing.army/download/phishing_army_blocklist_extended.txt | sha1sum
...
if [ "${ACTVERSION}" != "${INSTALLEDVER}" ]; then
...

and compare via the checksum (SHA-1 in this case) whether something has changed and, if so, update the list. The downtime is not that long, as you can see above.
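
Fleshing that idea out a bit, a sketch only (the state file, variable names and the script location are assumptions; adjust the path to wherever you saved the blacklist script above):

#!/bin/bash
# Run the (briefly disruptive) blacklist update only when the upstream list has
# actually changed, judged by its SHA-1 checksum.
URL="https://phishing.army/download/phishing_army_blocklist_extended.txt"
STATE="/var/ipfire/urlfilter/phishing_army.sha1"   # assumed state file
UPDATER="/root/domain-phising-list"                # assumed location of the script above

ACTVERSION=$(curl -Ls "${URL}" | sha1sum | awk '{print $1}')
INSTALLEDVER=$(cat "${STATE}" 2>/dev/null)

if [ "${ACTVERSION}" != "${INSTALLEDVER}" ]; then
    "${UPDATER}" && echo "${ACTVERSION}" > "${STATE}"
fi

# A matching fcrontab entry (fcrontab -e) could then run this check nightly,
# outside business hours, e.g.:
# 30 3 * * * /root/check-phishing-list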

Some more info for you.

Best,

Erik

1 Like