NFS mount from VPN client

I have OpenVPN set up and a client configured. From the client I can connect to the VPN and, for example, ping computers on my internal (green) net.

One of the computers on the green net has some storage shared to the green net, and that works with no problem.

But, when I attempt to mount the share on the client, the error message says the server rejected the mount. Am I correct in guessing that this is because the VPN client is on a 10.x.x.x IP and the green share is on 192.168.x.x?

Is this a setting in the VPN, in IPFire, or on the computer sharing the mount? Or a combination?

Does the sharing computer need to be told to allow mounts from the 10.x.x.x network? What security issues does this pose?

Trying to figure out as much as possible since I really can’t test things from home.

Troubleshooting tips, further reading recommendations, etc. appreciated.

By default any client connection you create should have access to the green network unless you disabled it when you created the client connection.

You can check this by pressing the pencil icon for the connection you created, in the table named Connection Status and -Control. This will open the edit window, and at the bottom of that page on the left-hand side there is a selection box labelled Client has access to these networks on IPFire’s site.
The entry named GREEN should be selected. If it is not, then select it and press Save. Then you will need to download and install that modified client connection package on your client.

If GREEN is selected then you will need to check the logs to see what is happening.

In your terminal run the command

tail -f /var/log/messages | grep openvpnserver

Then run your mount command on the client and you should be able to see what happens in that log file as it occurs. Hopefully there will be some more details about what is stopping the command.
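You can also do some checking from the client side. A rough sketch, assuming a share called /ShareName on a 192.168.x.x server and a mount point of /mnt/share (all placeholders; adjust to your actual setup):

# ask the server what it is willing to export to your client's source address
showmount -e 192.168.x.x

# attempt the mount verbosely to get more detail about any rejection
mount -v -t nfs 192.168.x.x:/ShareName /mnt/share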

I have realised that I might have misunderstood something you wrote.

When you say the server rejected the mount, do you mean that the IPFire OpenVPN server rejected the mount (my interpretation in my first reply), or do you mean that the NFS server on your green network rejected the mount?

If it is the latter, then that means you need to make sure that you also include the 10.x.x.x network in the allowed networks for your NFS Server.

Instead of making the whole of the 10.x.x.x network able to access the NFS Server, you can also set up static IP addressing for the clients. This will give each client a fixed address on the 10.x.x.x subnet. You can then add specific IP addresses to the allowed list in your NFS Server, so that only some of your clients are permitted to access it.
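As a sketch, the NFS server side of that could look something like this in /etc/exports (the share name and the addresses here are placeholders, not your real values):

# allow the whole OpenVPN subnet to mount the share
/ShareName 10.x.x.0/24(rw,sync)

# or allow only specific static client addresses
/ShareName 10.x.x.6(rw,sync) 10.x.x.10(ro,sync)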

Unfortunately I did not write down the message. But it confused me as well; I looked at it several times. It mentioned only the word “server”. The rest of the message noted the IP of the green machine sharing the storage.

On the NFS computer I then need to add the IP of the client in /etc/exports?

I know how to specify static IP addresses in DHCP for the green network.

Can you point me to docs on how to specify a static IP for a VPN client?
My search terms have not turned up much.

Hmm…GREEN is selected.

But, how could I be off site with the client, connecting via the VPN, and also type the tail command on the IPFire machine?

Or, are you saying run it on the client?

In that case the error was because of the NFS Server. That also matches up with the fact that GREEN was selected.

Correct. You can either give them access to the whole of the NFS filesystem or just to specific directories.
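For example, as a sketch in /etc/exports (placeholder names; the photos subdirectory is just an invented example):

# export the whole tree read-write
/SharedDir 10.x.x.0/24(rw,sync)

# or export only one subdirectory, read-only
/SharedDir/photos 10.x.x.0/24(ro,sync)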

Look in this wiki page
https://wiki.ipfire.org/configuration/services/openvpn/config/static_ip
I have used this, and so each of my OpenVPN roadwarriors has a static tunnel IP.

Your OpenVPN roadwarrior tunnel is working, from what you have said, so your client can reach IPFire over SSH from a terminal on your client, in the same way as if you were in the office/home.

Then you can run the tail command in that SSH session.
I do that with an SSH connection that is restricted to key-based authentication only, with no password access. However, if password access is all you currently have, you can also use that in this case, as the SSH session will be going through the encrypted OpenVPN tunnel.
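For example, assuming you have SSH enabled on IPFire (it listens on port 222 by default) and 192.168.x.1 is your IPFire green address (a placeholder):

ssh -p 222 root@192.168.x.1
tail -f /var/log/messages | grep openvpnserver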


Ok, I don’t remember ever trying to ssh to the firewall when “on the road” (TBH, I’m just now trying to use the VPN for more than simply obfuscating my internet traffic when on the road).

Thanks for this link. I had read it but didn’t fully grok the process. Unless there is a step-by-step instruction somewhere, it appears I need to read the several pages it links to and develop a list of steps or a checklist. I’ll see what I come up with and discuss it here later.

So, to begin with, this link discusses setting up a client-specific IP: openvpn/server.conf at master · OpenVPN/openvpn · GitHub

It mentions adding the road warrior to the ccd directory. Such a directory exists at /var/ipfire/ovpn/ccd, but this doesn’t seem to match the references in the various links.

Which file goes into which directory is handled by the CGI code of the OpenVPN WUI page.

In simple steps.

  1. Stop the OpenVPN Server

  2. Press the Static IP address pools button.

  3. Choose a name for the static pool. I used road warrior pool

  4. Choose a subnet value for it. Choose a subnet different from the one you defined on the Global Settings page, as that one will be used for the dynamic IP Addresses. I chose 10.110.52.0/24. Then press the Add button, and you will have a Static IP Address pool defined. Press the Back button.

  5. Press the Save Button and then Start the OpenVPN Server.

  6. If you already have some Client Configurations defined you can edit them to change them from dynamic to static. Press the pencil icon for the client configuration involved.

  7. The Client Configuration will be shown. Under Choose Network you will now have two option buttons: one labelled Dynamic OpenVPN IP address pool and the other with the name you gave the pool, in my case road warrior pool. Select that road warrior pool and press the down arrow on the drop-down box under Host Address. Choose one of the entries (it doesn’t matter which); just note the whole IP Address including the netmask value. For example, with my static pool I could choose 10.110.52.42/30. That will be the subnet that you need to give permission to in your NFS Server.

  8. Press Save at the bottom of that page.

That client will now be given a fixed tunnel IP every time it connects to OpenVPN. You don’t have to change anything on the client after making the above edit, as the tunnel IP is provided during the tunnel setup communication.
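For reference, the WUI stores that assignment as a file in the client config directory (ccd). I haven’t checked the exact contents IPFire generates, but for the 10.110.52.42/30 example above it would be roughly:

# /var/ipfire/ovpn/ccd/<client-name>
ifconfig-push 10.110.52.42 10.110.52.41

i.e. the client’s tunnel IP followed by the server-side endpoint of that /30.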

Hope the above helps.


Thank you very much! That will save me a lot of flipping between URLs putting together that list.

About specifying the client in /etc/exports on the NFS server.

The share is currently set up as:
/ShareName 192.x.x.x/24(rw,sync)

And, assuming the road warrior was defined as 10.110.52.55/30 (the /30 being selected by default during the above setup process),

The new /etc/exports would be:
/ShareName 192.x.x.x/24(rw,sync) 10.110.52.55/30(rw)

Does it make sense to have the sync option on the road warrior share?

Do you want to mount NFS shares or just have access?

Yes, I want to mount the disk.

But, what would “just have access” mean? That I could use command-line tools to get/put files, but not mount?

That is correct.

sync is the default if you don’t specify anything. The alternative is async, which buffers any data changes and then does one large write of all the outstanding data. The risk is that if you have a power outage or a crash, all those buffered changes will be lost.
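So an async variant of your entry would look like this (same placeholder addresses as above), and after any edit to /etc/exports you can re-export without restarting the NFS server:

/ShareName 192.x.x.x/24(rw,sync) 10.110.52.55/30(rw,async)

# apply the changed exports and then list what is exported with which options
exportfs -ra
exportfs -v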


E.g. the exports(5) man page.

Thanks. That is a lot of NFS exports file info in a succinct page.

My confusion is about what does or does not need to be included to make the share mountable.

Currently (within my green net), the simple line below allows every computer on the green net to mount the shared directories:

/SharedDir x.x.x.0(rw,sync)

I thought the question implied that some additional specification was needed for the mount to work for a VPN client.