Squid Problems - Core 174

Hi,

today we updated to Core 174 and we activated Squid. But after a few minutes Squid “Terminated Abnormally”.

Here is some of cache.log:


Maximum Resident Size: 81312 KB
Page faults with physical i/o: 15
2023/04/19 08:58:49 kid1| Current Directory is /srv/web/ipfire/cgi-bin
2023/04/19 08:58:49 kid1| Starting Squid Cache version 5.8 for x86_64-pc-linux-gnu...
2023/04/19 08:58:49 kid1| Service Name: squid
2023/04/19 08:58:49 kid1| Process ID 4151
2023/04/19 08:58:49 kid1| Process Roles: worker
2023/04/19 08:58:49 kid1| With 4096 file descriptors available
2023/04/19 08:58:49 kid1| Initializing IP Cache...
2023/04/19 08:58:49 kid1| DNS Socket created at 0.0.0.0, FD 7
2023/04/19 08:58:49 kid1| Adding domain ip-web.de from /etc/resolv.conf
2023/04/19 08:58:49 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2023/04/19 08:58:49 kid1| helperOpenServers: Starting 2/10 'asnbl-helper.py' processes
2023/04/19 08:58:49 kid1| Logfile: opening log stdio:/var/log/squid/access.log
2023/04/19 08:58:49 kid1| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2023/04/19 08:58:49 kid1| Store logging disabled
2023/04/19 08:58:49 kid1| Swap maxSize 0 + 8117248 KB, estimated 624403 objects
2023/04/19 08:58:49 kid1| Target number of buckets: 31220
2023/04/19 08:58:49 kid1| Using 32768 Store buckets
2023/04/19 08:58:49 kid1| Max Mem  size: 8117248 KB
2023/04/19 08:58:49 kid1| Max Swap size: 0 KB
2023/04/19 08:58:49 kid1| Using Least Load store dir selection
2023/04/19 08:58:49 kid1| Current Directory is /srv/web/ipfire/cgi-bin
2023/04/19 08:58:49 kid1| Finished loading MIME types and icons.
2023/04/19 08:58:49 kid1| HTCP Disabled.
2023/04/19 08:58:49 kid1| Squid plugin modules loaded: 0
2023/04/19 08:58:49 kid1| Adaptation support is off.
2023/04/19 08:58:49 kid1| Accepting HTTP Socket connections at conn10 local=192.168.1.1:800 remote=[::] FD 13 flags=9
2023/04/19 08:58:49 kid1| Accepting NAT intercepted HTTP Socket connections at conn12 local=192.168.1.1:801 remote=[::] FD 14 flags=41
Apr 19 08:58:49 squid-asnbl-helper[4147] WARN: No ASNBL configured. This is acceptable as long as this script is configured to do anything, you just have been warned...
Apr 19 08:58:49 squid-asnbl-helper[4152] WARN: No ASNBL configured. This is acceptable as long as this script is configured to do anything, you just have been warned...
Apr 19 08:58:49 squid-asnbl-helper[4153] WARN: No ASNBL configured. This is acceptable as long as this script is configured to do anything, you just have been warned...
Traceback (most recent call last):
  File "/usr/bin/asnbl-helper.py", line 493, in <module>
    resolvedasn = resolve_asn(singleip, ASNDB)
  File "/usr/bin/asnbl-helper.py", line 193, in resolve_asn
    if not result.asn:
UnboundLocalError: local variable 'result' referenced before assignment
2023/04/19 08:58:50 kid1| WARNING: external_acl_type #Hlpr1 exited
    current master transaction: master53
2023/04/19 08:58:50 kid1| Too few external_acl_type processes are running (need 1/10)
    current master transaction: master53
2023/04/19 08:58:50 kid1| ERROR: The external_acl_type helpers are crashing too rapidly, need help!
    current master transaction: master53
2023/04/19 08:58:50 kid1| Starting new helpers
    current master transaction: master53
2023/04/19 08:58:50 kid1| helperOpenServers: Starting 1/10 'asnbl-helper.py' processes
    current master transaction: master53
Traceback (most recent call last):
  File "/usr/bin/asnbl-helper.py", line 493, in <module>
    resolvedasn = resolve_asn(singleip, ASNDB)
  File "/usr/bin/asnbl-helper.py", line 193, in resolve_asn
    if not result.asn:
UnboundLocalError: local variable 'result' referenced before assignment
2023/04/19 08:58:50 kid1| WARNING: external_acl_type #Hlpr2 exited
    current master transaction: master60
2023/04/19 08:58:50 kid1| Too few external_acl_type processes are running (need 1/10)
    current master transaction: master60
2023/04/19 08:58:50 kid1| Closing HTTP(S) port 192.168.1.1:800
    current master transaction: master60
2023/04/19 08:58:50 kid1| Closing HTTP(S) port 192.168.1.1:801
    current master transaction: master60
2023/04/19 08:58:50 kid1| storeDirWriteCleanLogs: Starting...
    current master transaction: master60
2023/04/19 08:58:50 kid1|   Finished.  Wrote 0 entries.
    current master transaction: master60
2023/04/19 08:58:50 kid1|   Took 0.00 seconds (  0.00 entries/sec).
    current master transaction: master60
2023/04/19 08:58:50 kid1| FATAL: The external_acl_type helpers are crashing too rapidly, need help!
   current master transaction: master60
2023/04/19 08:58:50 kid1| Squid Cache (Version 5.8): Terminated abnormally.
    current master transaction: master60
CPU Usage: 0.052 seconds = 0.031 user + 0.020 sys

If we deactivate the fast-flux detection, it runs.
We use Squid only in transparent mode, without URL filter, update accelerator, ClamAV… etc.

Is there a way to get it running with the detection enabled?

Can you please confirm that this is a typo and you have actually updated to Core 174?

Core Update 175 is still in unstable.

A bug report has already been filed for this, and a fix patch has been submitted, but it is not yet merged into the next branch.
https://bugzilla.ipfire.org/show_bug.cgi?id=13023


It’s Core 174, sorry.
How can I apply this patch?

The patch as submitted is intended to be applied to the IPFire build system and not directly to a running system.

The portion that relates to patching the asnbl-helper.py file has to be extracted and saved as a file.

I have extracted the relevant part of the changes in the following section.

--- squid-asnbl-0.2.4/asnbl-helper_orig.py
+++ squid-asnbl-0.2.4/asnbl-helper.py
@@ -172,17 +172,19 @@
     return parsedasns
 
 
-def resolve_asn(ipaddr: str, asndb):
-    """ Function call: resolve_asn(IP address to be resolved,
-                                   ASN database instance object)
-    This function looks up the Autonomous System for the given IP address. It expects
-    an IPFire location database object to be passed as a second parameter, hence relying
-    on another function to set that up. """
+def resolve_asn(ipaddr: str):
+    """ Function call: resolve_asn(IP address to be resolved)
+    This function looks up the Autonomous System for the given IP address. """
+
+    # Fix for #13023
+    # Initialize the result variable before it's first use, otherwise python3
+    # will sometimes detect a 'mismatch' using global and local variables
+    lookup_result = None
 
     # libloc cannot handle ipaddress objects here, so casting into a string is necessary
     # for good measure, to avoid exceptions here...
     try:
-        result = asndb.lookup(str(ipaddr))
+        lookup_result = ASNDB.lookup(str(ipaddr))
     except BlockingIOError:
         # XXX: Prevent likely libloc bug from causing this helper to crash
         # (see upstream bug https://bugzilla.ipfire.org/show_bug.cgi?id=13023)
@@ -190,21 +192,25 @@
 
     # In case nothing was returned above, satisfy result expectation to this function...
     try:
-        if not result.asn:
+        if not lookup_result.asn:
             return 0
     except AttributeError:
         return 0
 
-    return result.asn
-
-
-def asndb_response_tests(testdata: str, asndb):
-    """ Function call: asndb_response_tests(response rest data,
-                                            ASN database instance object)
+    return lookup_result.asn
+
+
+def asndb_response_tests(testdata: str):
+    """ Function call: asndb_response_tests(response rest data)
 
     This function asserts the given ASN database to return expected ASNs for
     given IP addresses in order to be considered operational. It returns
     True if this test succeeds, and False otherwise. """
+
+    # Fix for #13023
+    # Initialize the result variable before it's first use, otherwise python3
+    # will sometimes detect a 'mismatch' using global and local variables
+    lookup_result_test = None
 
     tresult = True
 
@@ -216,13 +222,13 @@
 
     for stestdata in ptdata:
         LOGIT.debug("Running response test for '%s' against ASNDB '%s' ...",
-                    stestdata, asndb)
-
-        returndata = resolve_asn(stestdata[0], asndb)
-
-        if returndata != int(stestdata[1]):
+                    stestdata, ASNDB)
+
+        lookup_result_test = resolve_asn(stestdata[0])
+
+        if lookup_result_test != int(stestdata[1]):
             LOGIT.error("Response test failed for ASNDB '%s' (tuple: %s), aborting",
-                        asndb, stestdata)
+                        ASNDB, stestdata)
             tresult = False
             break
 
@@ -428,7 +434,7 @@
 ASNDB = set_up_location_database(config["GENERAL"]["ASNDB_PATH"])
 
 LOGIT.debug("Running ASN database response tests...")
-if asndb_response_tests(config["GENERAL"]["TESTDATA"], ASNDB):
+if asndb_response_tests(config["GENERAL"]["TESTDATA"]):
     LOGIT.debug("ASN database operational - excellent. Waiting for input...")
 else:
     LOGIT.error("ASN database response tests failed, aborting")
@@ -490,7 +496,7 @@
     ASNS = []
     for singleip in IPS:
         # Enumerate ASN for this IP address...
-        resolvedasn = resolve_asn(singleip, ASNDB)
+        resolvedasn = resolve_asn(singleip)
 
         # In case protection against destinations without public AS announcements for their
         # IP addresses is desired, the query will be denied in case ASN = 0 appears in an

Copy and paste this section into a file, which can be named whatever you like, for example asnbl-helper.patch.
Make sure that, if you are using a Windows editor, it is configured not to add CR/LFs at the end of every line.

Then transfer that file to your IPFire system. The simplest place to put it is in /tmp/.
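If you are not sure whether your editor added carriage returns, you can also strip them on the IPFire box itself after transferring the file. This is just a convenience sketch of mine, not part of the original instructions, assuming the file was saved as /tmp/asnbl-helper.patch:

# remove any trailing carriage returns that a Windows editor may have added
sed -i 's/\r$//' /tmp/asnbl-helper.patch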

Then from the IPFire command line run the following command.

patch -b /usr/bin/asnbl-helper.py -i /tmp/asnbl-helper.patch
The -b tells patch to make a backup copy of the file before patching, with .orig as the suffix. If something goes wrong you can then revert to the original file.
The first path/file name is the file to be patched.
The -i tells patch to use the patch file defined by the following path/file name.

Don’t run the command more than once without first reverting the file to the original version, otherwise the .orig file will be overwritten.
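If you want to double-check things, the following sketch first tests whether the patch applies cleanly and then shows how to go back to the backup that -b created. The --dry-run option is standard GNU patch; the .orig path is simply the backup file produced by the command above:

# test the patch without modifying anything
patch --dry-run -b /usr/bin/asnbl-helper.py -i /tmp/asnbl-helper.patch

# restore the unpatched file from the backup if something went wrong
cp /usr/bin/asnbl-helper.py.orig /usr/bin/asnbl-helper.py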

After patching it is probably worth restarting the web proxy, although I think it should work directly as is.
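If you prefer to restart it from the console rather than the web interface, something like the following should do it. I am assuming the usual IPFire init script location here, so treat it as a sketch rather than a verified command:

# restart the web proxy so the patched helper processes are started fresh
/etc/init.d/squid restart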

I also tested this sequence by extracting asnbl-helper.py from my IPFire machine to another location and running the patch as described, and I can confirm that it worked successfully.


Thanks, it seems to solve the problem. It has been running stable for some hours now.
We only get some DNS errors from Microsoft in cache.log:

Current master transaction: master178
Apr 20 13:31:58 squid-asnbl-helper[10205] WARN: Destination 'dns.msftncsi.com' resolves to IP addresses 'fd3e:4f5a:5b81::1' without corresponding ASN, probably selectively announced
Apr 20 13:31:58 squid-asnbl-helper[10205] INFO: Denying access to destination 'dns.msftncsi.com' due to suspected selective announcements
2023/04/20 13:39:34 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master465
2023/04/20 13:39:34 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master465
Apr 20 13:39:44 squid-asnbl-helper[10205] INFO: Unable to resolve A/AAAA record of queried destination 'web.vortex.data.microsoft.com', returning ERR...
2023/04/20 13:40:57 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master600
2023/04/20 13:40:57 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master600
2023/04/20 13:45:30 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master888
2023/04/20 13:45:30 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master888
Apr 20 13:50:12 squid-asnbl-helper[10205] INFO: Unable to resolve A/AAAA record of queried destination 'pti.store.microsoft.com', returning ERR...
2023/04/20 14:11:01 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master1373
2023/04/20 14:11:01 kid1| DNS error while resolving ipv6.msftconnecttest.com: No valid address records
current master transaction: master1373

Otherwise everything works as it should.

Thanks

Good to hear.

I would expect that the fix will end up in Core Update 175.

I am at 172. So it is advisable to wait for 175, right?
Is it a security risk to stay at 172? Everything is working fine and I don’t want to cause a failure unnecessarily.

The problem noted in this bug does not affect everyone.

The original bug reporter had the problem with Core Update 172, others with Core Update 173 or 174.

I haven’t experienced the problem at all on any of those Core Update versions.

Looking through the release announcements for Core Updates 173 and 174, there are some security fixes in programs such as apache, curl and strongswan, but they are not considered exploitable in normal IPFire operation. There are also a lot of bug fixes across those two Core Updates.

You will need to make your own decision on waiting for Core Update 175 or not.

The other option is to make a backup, store it off of IPFire, and then do the update to CU174 and see how things go. If you find some problems you can always do a fresh installation of CU172 and restore the backup. It is not a long job. Just record info such as the MAC address for each interface so that on re-install the appropriate interface colour can be assigned to the right MAC address. Same with the chosen IPs for each interface.


It started exactly after “a massive patchset for updating Python” was integrated in CU172, as stated in the blog post. :face_with_hand_over_mouth: :hugs: :+1:

Exactly, Python was updated from V2 to V3 there. This changed the way Python detects local variables, and the false error was happening on all of our supported firewalls until we created the patch.

I wouldn’t suggest staying at 172; I would upgrade to 175 and manually apply the patch. With the patch, Squid has been running stable again for over 1 1/2 months on all our affected systems, without a single shutdown.


I am going off on a small tangent here:

First, thank you for the info about the patch command!

Second, on a running system I tried applying a patch without removing anything, and it seemed to work fine. It appeared to ignore the From, Date and Subject headers, the diff line(s) and the numbers at the bottom of the files.


Good catch - you are quite correct.

In this case I did not need to extract anything, as there was only one patch in the file and the command I gave overrode the location definition in the diff line.

You can also end up with a patch file that has two or more diff sections that apply to different locations. Then you would need to extract each part separately, as the patch command I gave can only take one location for the patch to be applied.
This is because the patch is written for the build directory structure, while you want to apply it to an installation directory structure.