Hi!
Unfortunately my network devices are very slow to answer SNMP get-bulk requests, causing the Cricket process to run indefinitely. Would it be possible to implement a new feature to restrict the number of interfaces generated by mcc.py? That way, I could tell mcc.py to generate config for trunk interfaces only.
Searching the source code, I found this piece of code. I think the last line (the `interfaces = ...` assignment) is the one that would need to be changed to provide this feature:
def create_interface_config(netbox, targetdir, module):
    """Create config for this netbox and store it in targetdir

    returns: a list of containers
    """
    LOGGER.info("Creating config for %s" % targetdir)
    config = CONFIG[module]
    interfaces = netbox.interface_set.select_related('netbox').filter(
        config['filter']).distinct().order_by('ifindex')
Until this feature can be applied, is there any workaround I can use to accomplish that? All of my switches' trunk ports are standardized to be the last ports (22 to 26 and 44 to 48)...
On Mon, 1 Jul 2013 08:27:23 -0300 Bruno Galindro da Costa bruno.galindro@gmail.com wrote:
Hi!
Unfortunately my network devices are very slow to answer SNMP get-bulk requests, causing the Cricket process to run indefinitely.
Hi Bruno,
AFAIK, Cricket does not use get-bulk requests at all; I would venture to guess that your devices are slow to respond to SNMP requests, period.
Is it possible to implement a new feature to restrict the number of interfaces generated by mcc.py? That way, I could tell mcc.py to generate config for trunk interfaces only.
Doing that would cause traffic collection for router ports to cease as well, and I would think those are at least as important as your VLAN trunks.
Searching the source code, I found this piece of code. I think the last line (the `interfaces = ...` assignment) is the one that would need to be changed to provide this feature:
def create_interface_config(netbox, targetdir, module):
    """Create config for this netbox and store it in targetdir

    returns: a list of containers
    """
    LOGGER.info("Creating config for %s" % targetdir)
    config = CONFIG[module]
    interfaces = netbox.interface_set.select_related('netbox').filter(
        config['filter']).distinct().order_by('ifindex')
I wouldn't change that line, but rather the CONFIG dictionary of that module. If you want to exclude switch ports that are non-trunk ports, you can make a modification like this diff:
========================================================================
diff --git a/python/nav/mcc/interfaces.py b/python/nav/mcc/interfaces.py
--- a/python/nav/mcc/interfaces.py
+++ b/python/nav/mcc/interfaces.py
@@ -26,7 +26,7 @@
         'dirname': 'ports',
         'boxfilter': ~Q(category='EDGE'),
         'filter': (Q(gwportprefix__isnull=False) |
-                   Q(baseport__isnull=False) |
+                   (Q(baseport__isnull=False) & Q(trunk=True)) |
                    Q(ifconnectorpresent=True)),
     },
 }
========================================================================
This would ensure that any switch port (baseport number is not null) selected for traffic data collection must also be a trunk. The filter will still include router ports and physical ports. You may also wish to exclude physical ports by removing the `Q(ifconnectorpresent=True)` criterion.
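If you want to preview which interfaces such a modified filter would select before regenerating the Cricket config, you could run something like the sketch below from a Python shell with NAV's Django environment loaded. This is only a sketch; the sysname is a placeholder for one of your own switches.

========================================================================
# Sketch: preview which interfaces the modified filter picks for one netbox.
# Assumes NAV's Django environment is set up; the sysname is hypothetical.
from django.db.models import Q
from nav.models.manage import Netbox

new_filter = (Q(gwportprefix__isnull=False) |
              (Q(baseport__isnull=False) & Q(trunk=True)) |
              Q(ifconnectorpresent=True))

netbox = Netbox.objects.get(sysname='your-switch.example.org')
for ifc in netbox.interface_set.filter(new_filter).distinct().order_by('ifindex'):
    print ifc.ifindex, ifc.ifname, ifc.trunk
========================================================================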
NAV includes the main IP device category "EDGE" for one reason only: Cricket would not scale to the massive number of switch ports installed at NTNU, so they decided to exclude all access/edge switches from traffic stats collection and created the EDGE category to mean "a switch we don't collect interface traffic counters from". This might also be an interesting option for you.
Another Cricket issue is that it works in a purely serial manner. Any delay in collecting a single value will delay the entire Cricket collection run, which must complete within 5 minutes to be on time. Cricket can be made to run partially in parallel by modifying the `subtree-sets` file and splitting the configuration trees into two or more subsets (the latest NAV version ships with a subtree-sets file that defines a single "nav" set containing all of Cricket's directories).
NAV's `cron.d/cricket` file can then be modified by duplicating the cricket-collector line once for each configured subtree set and replacing the subtree set name on each instance with one from the `subtree-sets` file. These sets will then be collected in parallel (as soon as you run `nav restart cricket` to update the actual crontab).
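As a rough sketch, a split setup could look something like this. The directory names, installation paths and collector path here are only examples; use the directories listed in your existing "nav" set and the paths from your own installation.

========================================================================
# subtree-sets (sketch): two sets instead of the single "nav" set
base: /usr/local/nav/etc/cricket-config
logdir: /var/log/cricket

set routers:
    /routers

set ports:
    /ports

# cron.d/cricket (sketch): one collector line per set, collected in parallel
*/5 * * * * /path/to/cricket/collect-subtrees routers
*/5 * * * * /path/to/cricket/collect-subtrees ports
========================================================================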
Until this feature can be applied, is there any workaround I can use to accomplish that? All of my switches' trunk ports are standardized to be the last ports (22 to 26 and 44 to 48)...
I've listed a few options above. I cannot guarantee that we will make this into a configurable feature of mcc.py, as we are currently in the process of throwing out Cricket and rrdtool from NAV and replacing them with the more scalable Graphite [1] and new plugins for ipdevpoll. When these changes are released, the above issues may become moot: collection will always run in parallel, so a delay in one device should not affect the others.
[1] http://graphite.wikidot.com/
OK Morten, thank you for answering my question.
After changing the code, the job is almost done. But I need Cricket to check other devices too (like printers, wireless devices and some servers), and most of these devices don't use tagged VLANs, so requiring trunk=True would exclude those kinds of devices. In my first e-mail, I forgot to consider this...
I was thinking there could be an option (now with Cricket, and later with Graphite) to specify, through a config file, a range of MAC address vendors that must be considered by Cricket/Graphite. What do you think?
On Sat, 6 Jul 2013 16:07:12 -0300 Bruno Galindro da Costa bruno.galindro@gmail.com wrote:
OK Morten, thank you for answering my question.
After changing the code, the job is almost done. But I need Cricket to check other devices too (like printers, wireless devices and some servers), and most of these devices don't use tagged VLANs, so requiring trunk=True would exclude those kinds of devices. In my first e-mail, I forgot to consider this...
The suggested change will only require a switch port to be trunking for it to be configured into Cricket. System statistics from devices are unaffected by this change, as are statistics from any port that isn't operating as a switch port.
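For reference, here is how the modified filter from my earlier diff breaks down; it is the same Q expression, just with comments added:

========================================================================
from django.db.models import Q

modified_filter = (
    Q(gwportprefix__isnull=False) |                # router ports: always included
    (Q(baseport__isnull=False) & Q(trunk=True)) |  # switch ports: only when trunking
    Q(ifconnectorpresent=True)                     # other physical ports (printers,
)                                                  # servers, APs): still included
========================================================================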
I was thinking there could be an option (now with Cricket, and later with Graphite) to specify, through a config file, a range of MAC address vendors that must be considered by Cricket/Graphite. What do you think?
I don't think we can prioritize such a feature in mcc, as we're in the process of throwing it all out.
We haven't yet put any filtering capabilities into the upcoming ipdevpoll statistics modules, but we might. Delays in single devices won't be that much of an issue for the overall statistics collection in ipdevpoll, since polling of devices occurs in parallel, and not in a serial fashion.
ipdevpoll already has the capability of setting individual SNMP parameters per device, but this has not been exposed in the configuration file yet. We would need some useful way of specifying single devices or groups of devices to set parameters for, using the INI-style config file format.
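Purely as a hypothetical illustration of the kind of thing I mean (none of these section names or keys exist in ipdevpoll's configuration today), it could perhaps end up looking something like this:

========================================================================
# Hypothetical sketch only -- not existing ipdevpoll.conf syntax.
[snmp:group:slow-switches]
# some way of selecting devices, e.g. by sysname pattern or device group
match = sw-*.example.org
# per-group SNMP tuning parameters
max-repetitions = 10
timeout = 5
========================================================================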