ZSH: Simple Network Port Checker

By Dale Reagan | April 19, 2014

Ok, I had an itch – I prefer to use what I deem ‘simple’ tools to get things done – in this case I needed a simple solution for checking for open ports (i.e. that port traffic is not blocked by a firewall).  After a quick scan of the ZSH man pages I found the tcp_open and tcp_close functions (see man zshtcpsys).

Simple use of these functions is, well, simple.  I’ve become fond of ZSH after working with it for a few years – and this relative simplicity continues to entice me.

From the man zshtcpsys page: To use the system where it is available, it should be enough to ‘autoload -U tcp_open’ and run tcp_open as documented below to start a session.

Ok, the simple sequence is:

  1. load TCP module
  2. open a tcp session
  3. close the session

Sample script:

#!/bin/zsh
autoload -U tcp_open
tcp_open localhost 80
tcp_close

Ok, if you run the above there is a delay after the tcp_open command.  I prefer a quick response, so I shorten this to:

T_MSG=$(tcp_open localhost 80)

By ‘wrapping things up’ in a command substitution, the ‘tcp_close’ is done for you (the tcp_close command, if still present, will announce that there are no open sessions to close…)
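For example, a quick interactive check along these lines is possible (a sketch; the branch relies on tcp_open returning a non-zero exit status when the connection fails, which is what the result codes below show):

autoload -U tcp_open
## the exit status of the command substitution is the exit status of tcp_open
if T_MSG=$(tcp_open localhost 80 2>&1) ; then
    print "port 80 looks open"
else
    print "port 80 looks closed or blocked: ${T_MSG}"
fi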

So my simple script becomes:

#!/bin/zsh
HOST_TO_CHECK="$1"
PORT_TO_CHECK="$2"
## load the required ZSH functions
autoload -U tcp_open
## capture the text output AND standard Error output from tcp_open
T_MSG=$(tcp_open ${HOST_TO_CHECK} ${PORT_TO_CHECK} 2>&1)
E_STAT=$? ## capture the 'result code' from the previous command
## print a summary, and remove extra lines from ${T_MSG} results
printf "${HOST_TO_CHECK} | ${PORT_TO_CHECK} | ${E_STAT} | ${T_MSG}\n" | head -1

Simple enhancements – add some print formatting:

printf "${HOST_TO_CHECK} | Port %5d | ${E_STAT} | ${T_MSG}\n" ${PORT_TO_CHECK} | head -1


Save the above as /tmp/chk.port.zsh (adjust the path to zsh if needed, and chmod 755) and try:

for PORT in 22 23 80 443 ; do /tmp/chk.port.zsh SYSNAME ${PORT} ; done

You should get something like:

Test_host | Port    22 | 0 | Session 1 (host Test_host, port 22 fd 3) opened OK. Setting default TCP session 1
Test_host | Port    23 | 1 | tcp_open:ztcp:174: connection failed: connection refused
Test_host | Port    80 | 0 | Session 1 (host Test_host, port 80 fd 3) opened OK. Setting default TCP session 1
Test_host | Port   443 | 1 | tcp_open:ztcp:174: connection failed: connection refused

We can clean this up a bit more by removing repetitive messages/info, i.e. with an update:

printf "${HOST_TO_CHECK} | ${PORT_TO_CHECK} | ${E_STAT} | ${T_MSG}\n" | head -1 | \
   sed -e 's/tcp_open:ztcp:174://g'
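
Putting the pieces together, a finished version of the checker might look something like this (a sketch that simply combines the fragments above – the 'Port %5d' formatting and the sed cleanup are the enhancements just described):

#!/bin/zsh
## /tmp/chk.port.zsh - simple TCP port checker built on the ZSH tcp functions
HOST_TO_CHECK="$1"
PORT_TO_CHECK="$2"
## load the required ZSH functions
autoload -U tcp_open
## capture the text output AND standard error output from tcp_open;
## wrapping the call in $(...) means no session is left open afterwards
T_MSG=$(tcp_open ${HOST_TO_CHECK} ${PORT_TO_CHECK} 2>&1)
E_STAT=$? ## capture the 'result code' from the previous command
## print a one-line summary, trimming extra lines and the repetitive prefix
printf "${HOST_TO_CHECK} | Port %5d | ${E_STAT} | ${T_MSG}\n" ${PORT_TO_CHECK} | \
    head -1 | sed -e 's/tcp_open:ztcp:174://g'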

“Wait, Wait!”, you say… “Isn’t tool XYB ‘better’ for port checking?…”
Hmm, perhaps, but the point here is that I can do ‘something’ by taking advantage of an existing resource without having to introduce yet-another-tool…

Some ‘enhancements’ you might consider would be to ‘parallelize’ this process, i.e. run N background processes – this takes some tinkering but can speed things up quite a bit (but mind that you don’t consume all of your system resources!)
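
A minimal sketch of the idea, reusing the /tmp/chk.port.zsh script from above (each check runs as a background job; output order is not guaranteed, and for long port lists you would want to cap the number of jobs):

#!/bin/zsh
HOST="$1"
## launch one background check per port, then wait for all of them
for PORT in 22 23 25 80 443 3306 ; do
    /tmp/chk.port.zsh ${HOST} ${PORT} &
done
wait   ## block until every background check has finished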

As always, I’d expect your mileage (and opinions) to vary – at least a bit. :)

Topics: Problem Solving, System and Network Security, Unix-Linux-Os | No Comments »

Simple, Elastic & Agile

By Dale Reagan | November 19, 2013

Ok, something from 50+ years ago that I encountered during my AM reading – I stumbled across this item that relates well to keeping stuff ‘simple’ (i.e. short in duration…)

The context was actually a ‘biz’ article with something like, “For meetings, most results/work occur during the ~1st 20 minutes of the meeting – the ‘rest’ of the meeting tends to be non-productive…”

My take – it suggests that during meetings, each topic/item/problem may best be resolved if/when you can limit discussion to short durations – agile!

Work is “elastic” – it stretches to fill the time allotted…

Which led to:

http://en.wikipedia.org/wiki/Parkinson's_law

Quote from Wikipedia:

First articulated by Cyril Northcote Parkinson as part of the first sentence of a humorous essay published in The Economist in 1955,[1][2] it was later reprinted together with other essays in the book Parkinson’s Law: The Pursuit of Progress (London, John Murray, 1958). He derived the dictum from his extensive experience in the British Civil Service.

The current form of the law is not that which Parkinson refers to by that name in the article. Rather, he assigns to the term a mathematical equation describing the rate at which bureaucracies expand over time. Much of the essay is dedicated to a summary of purportedly scientific observations supporting his law, such as the increase in the number of employees at the Colonial Office while Great Britain’s overseas empire declined (indeed, he shows that the Colonial Office had its greatest number of staff at the point when it was folded into the Foreign Office because of a lack of colonies to administer). He explains this growth by two forces: (1) “An official wants to multiply subordinates, not rivals” and (2) “Officials make work for each other.” He notes in particular that the total of those employed inside a bureaucracy rose by 5–7% per year “irrespective of any variation in the amount of work (if any) to be done”.

In 1986, Alessandro Natta complained about the swelling bureaucracy in Italy. Mikhail Gorbachev responded that “Parkinson’s Law works everywhere”.[3]

Corollaries

In time, however, the first-referenced meaning of the phrase has dominated, and sprouted several corollaries, the most well known being the Stock-Sanford Corollary to Parkinson’s Law:

If you wait until the last minute, it only takes a minute to do.[4]

Other corollaries include (relating to computers):

Data expands to fill the space available for storage. or Storage requirements will increase to meet storage capacity.

Generalization

“Parkinson’s Law” could be generalized further still as:

The demand upon a resource tends to expand to match the supply of the resource.

An extension is often added to this, stating that:

The reverse is not true.

This generalization has become very similar to the economic law of demand; that the lower the price of a service or commodity, the greater the quantity demanded.

Some define Parkinson’s Law in regard to time as:

The amount of time which one has to perform a task is the amount of time it will take to complete the task.

 

 

Topics: Problem Solving | Comments Off

Android: Working with Data structures using Python (part 1)

By Dale Reagan | August 18, 2013

 

Ok, you have an Android device (or two) and you want to explore the available data structures using Python and SL4A.  Where to start?

For this post we will assume that:

In my case, my background includes mostly programming in C (and some embedded systems work) as well as what I will loosely call ‘shell programming’.  If you explore the Android development eco-system you will find that it supports primarily Java (the main language used for Android development [via the Android SDK]) but there is also support for C programming via the Android NDK.  The SDK also includes a number of ‘shell’ tools which provide the ‘glue’ needed to communicate with Android devices via USB connections.

Enter SL4A – this solution provides a scripting interface into the Android environment via multiple scripting languages.  Once you establish your desired languages under SL4A, you have a simple/quick means to try/explore/prototype your ideas.

SL4A provides a consistent data interface to the underlying Android (Java) data structures across its supported scripting languages, so once understood, you could port to any desired scripting solution.  In this case I chose to work with Python since I wanted to learn a bit more about the language.  It just so happens that there is a very good book (~2011) by Paul Ferrill, Pro Android Python with SL4A, which you can purchase in print or E-book form.  With a bit of searching, you can also locate a number of simple/working Android-specific Python examples (beyond the examples found on the SL4A Wiki pages.)

After reviewing the example/sample SL4A scripts, I started by exploring how to view the data being returned via the SL4A API.  This led me to a number of Python sites as I tried to understand the data being returned.  My specific interest was to have a means to consistently display the returned data.  The structure of the data returned via function calls is consistent and contains three (3) entities:

  1. some sort of ID – appears to be somewhat random
  2. data returned by the desired query (i.e. GPS data) and
  3. an error code/status.

NOTE - before going any further you probably need to download, install, and explore SL4A – otherwise, the rest of this post may not make much sense or provide any benefit.

The structure of the second item, however, will vary based on what is being accessed.  This is where I had to dig a bit to get output that I could manipulate.  This data could take several forms.

The introductory pages for SL4A provide this bit of info:

All SL4A API calls return an object with three fields:

  1. id: a strictly increasing, numeric id associated with the API call.
  2. result: the return value of the API call, or null if there is no return value.
  3. error: a description of any error that occurred or null if no error occurred.

I was thinking that it might be useful/interesting to explore the ‘types’ of data being returned.  Once I know the type of data, I can present it (format the output) in a consistent, easy-to-use manner – or so I thought.  It turns out this notion is a non-Pythonic approach (based on some reading, it seems that in Python the *preferred* approach is to ‘try stuff’ and then ‘ask forgiveness’, rather than figuring out what you have before doing ‘something’.)  This is not how *I think* (at least not yet), so I venture forth using an approach that I am familiar with – you are advised that my Python code may be un-Pythonic, but it should still work.  I will guess that if you know *enough* about Python, deciding how best to deal with unknown data structures would let you create a Pythonic solution…

JSON and Android devices

Mr. Ferrill discusses a number of relevant background topics including JavaScript Object Notation (JSON) – “a way of defining a data structure or an object in much the same way you would in the context of a program.”  He notes that *many* (ok, not all) of the API calls return information using JSON to provide structure to the data.  The Python language includes significant support for JSON data structures so there are a number of resources available (i.e. modules/documentation/examples) outside of the Android environment that can be used as resources.

Mr. Ferrill continues, “When you move a JSON object from one place to another, you must serialize and then deserialize that object.  This requires the json.load() and json.loads() functions for decoding, and json.dump() plus json.dumps() for encoding.”  Ok, until I encountered this bit of information (which is a bit more detailed than the three items listed above) I was struggling with ‘moving’ the data returned by API calls.  Refactoring the presentation a bit:
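
Here is a minimal round-trip sketch of that idea (my own example, not from the book – the dictionary is made up, modeled on the readSensors() result shown later in this post):

import json

## a stand-in for the 'result' dictionary returned by an SL4A call
sample_result = {u'light': 89, u'accuracy': 3, u'azimuth': 0.8555}

json_text = json.dumps(sample_result)   ## encode: Python dict -> JSON string
print 'Encoded :', json_text

decoded = json.loads(json_text)         ## decode: JSON string -> Python dict
print 'Decoded :', decoded['light'], decoded['azimuth']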

FYI – I am placing the code from this post on GitHub.

I started out using a function that was something like this:


 ###
 def what_print(text_msg, this_struct):
     ## report the returned data based on its Python type name
     my_type = type(this_struct).__name__
     if my_type == 'Result':
         print 'Result: ', this_struct
     elif my_type == 'dict':
         print 'Dictionary: ', this_struct
     elif my_type == 'list':
         print 'List: ', this_struct
     else:
         print 'Curious (%s): ' % text_msg, this_struct
 

Using a *case* statement would make the above simpler, but Python does not seem to use this approach (again, not a Pythonista…); a dictionary of handler functions is one common substitute – see the sketch after the list below.  So, now we have a simple function to explore *results* with, as in:

  1. use API to fetch data
  2. pass the data to the preceding function
  3. review the output
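
As promised above, here is one possible dictionary-of-handlers substitute for a *case* statement (my sketch, not from the book):

## map type names to small handler functions instead of nested if/else
def print_result(msg, data):  print 'Result: ', data
def print_dict(msg, data):    print 'Dictionary: ', data
def print_list(msg, data):    print 'List: ', data
def print_other(msg, data):   print 'Curious (%s): ' % msg, data

HANDLERS = {'Result': print_result, 'dict': print_dict, 'list': print_list}

def what_print2(text_msg, this_struct):
    my_type = type(this_struct).__name__
    ## fall back to print_other for any type we did not anticipate
    HANDLERS.get(my_type, print_other)(text_msg, this_struct)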

Here is a simple example to read sensor data from an Android device using Python via SL4A.

import android, time
droid = android.Android()
droid.startSensingTimed(1, 250)
time.sleep(1)

## get ALL data returned by the API call
raw_data = droid.readSensors()
print 'Raw Data Returned by API: ', raw_data

## now just fetch the 'results'
raw_result = droid.readSensors().result
print 'Raw RESULT Data Returned by API: ', raw_result

## after adding the 'what_print' function to the code let's see what the 'data types' are
what_print('Raw Data', raw_data)
what_print('Raw Result', raw_result)

Note that there are ‘other’ ways to get to the ‘parts’ of the data, i.e. using ‘indexing’:

raw_data[1] ## the 'result' - remember, there are three (3) parts, accessed via: 0, 1, 2
raw_data[2] ## just the error code/status
raw_data[0] ## just the ID for this data

###

So, putting the above together (and reminding you that Python is very picky about whitespace, so copy/pasted code may produce interesting results, not work, or only work partially – best to fetch it from GitHub; also note that the GitHub source has more output refinements, with added newlines and tabs…):


######### sample output
 ### What type of data do we have? ###
        Raw Data Returned by API:
        Result(id=2, result={u'light': 89, u'accuracy': 3, u'pitch': -0.17525501549243927, u'xMag': -35.628616000000001, u'azimuth': 0.85555952787399292, u'zforce': 9.8293949999999999, u'yforce': 1.7405846, u'time': 1376845710.2449999, u'yMag': 28.437930999999999, u'zMag': -15.815201999999999, u'roll': -0.0094534987583756447, u'xforce': 0.092924950000000006}, error=None)

[note the three components above: id, result, and error status.]

        Raw RESULT Data Returned by API:
        {u'light': 89, u'accuracy': 3, u'pitch': -0.17525501549243927, u'xMag': -35.628616000000001, u'azimuth': 0.85555952787399292, u'zforce': 9.8293949999999999, u'yforce': 1.7405846, u'time': 1376845710.2449999, u'yMag': 28.437930999999999, u'zMag': -15.815201999999999, u'roll': -0.0094534987583756447, u'xforce': 0.092924950000000006}

        What-Print | Raw Data is type: Result:
        Result(id=2, result={u'light': 89, u'accuracy': 3, u'pitch': -0.17525501549243927, u'xMag': -35.628616000000001, u'azimuth': 0.85555952787399292, u'zforce': 9.8293949999999999, u'yforce': 1.7405846, u'time': 1376845710.2449999, u'yMag': 28.437930999999999, u'zMag': -15.815201999999999, u'roll': -0.0094534987583756447, u'xforce': 0.092924950000000006}, error=None)
        -----------

        What-Print | Raw Result is type: Dictionary:
        {u'light': 89, u'accuracy': 3, u'pitch': -0.17525501549243927, u'xMag': -35.628616000000001, u'azimuth': 0.85555952787399292, u'zforce': 9.8293949999999999, u'yforce': 1.7405846, u'time': 1376845710.2449999, u'yMag': 28.437930999999999, u'zMag': -15.815201999999999, u'roll': -0.0094534987583756447, u'xforce': 0.092924950000000006}
        -----------

Final note – tested with Android 4.3 running Python 2.6 via SL4A.

Next time I will connect this output to using JSON – as always, your mileage will vary. :)

Topics: Computer Technology, Hardware, Problem Solving, Sensors | Comments Off

Google errors: url(data:image/png;base64…???

By Dale Reagan | May 3, 2013

Some curious 404 errors started showing up in ~2011 – based on some digging, they appear to be attributable to Google tools.

In your log files you see something like:

"GET /url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUg…..5CYII%3d) HTTP/1.1"   OR

"GET /some_URL_path/url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUg…..5CYII%3d) HTTP/1.1"

The above type of encoding is described on Wikipedia as the data URI scheme – the objective is to improve web server response time by reducing the number of separate server requests needed to fetch small (image) files: just encode them in your HTML, much as you might combine all Javascript into one file.  In some cases, this approach may increase the total number of bytes transferred, since some forms of data (i.e. images) will require more bytes in any text-encoded format.
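
If you want to generate such a string yourself, one way is sketched below (assuming a small PNG named dot.png and the GNU coreutils base64 command – the -w0 flag suppresses line wrapping):

## encode a small image as a single-line base64 string
base64 -w0 dot.png > dot.b64

## the contents of dot.b64 can then be embedded, e.g. in CSS:
##   background-image: url(data:image/png;base64,<contents of dot.b64>);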

According to the Wikipedia link (above), data URIs are supported only for the following elements and/or attributes:

As with any solution there are trade-offs – there are a number of security concerns with this approach, and I am not convinced that the benefits outweigh the concerns…  A number of encoding examples are provided on the Wikipedia page, including examples for images, CSS, Javascript, HTML, and PHP.  One of the warning signs for Open Source software is a requirement to include hidden/encoded code using this approach – if you use such code you could be compromising your server(s) or your site visitors; this is a noted problem with FREE themes and plugins for various Open Source projects.

<!----- encoded data between comment markers ------->
<div id="data-uri-test-2"></div>
<style type="text/css">
#data-uri-test-2 {
 width: 180px;
 height: 180px;
 background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADgAAAAOCAYAAAB6pd%2buAAAAAXNSR0IArs4c6QAAAAZiS0dEAP8A%2fwD%2foL2nkwAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9oGAhENK17O5ogAAAAZdEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIEdJTVBXgQ4XAAAD6UlEQVRIx82WXWxTdRjGf6fndO3adbZ0VLoP9gFMXZQFNgSWDEkEYtSQkNVg4o2JH9NGJTMk6k01vTIhXshFzTCKE5NFORoXXDBs4nTMZHMzSETHDKyQyb7Xbu36dc7p8aaQZm5GNzd8rk7evOf%2fz%2fM%2bz%2f99X4E1htcn68v5742mffVRJd19uucqH539lSq3yKuHtlDmkPj99aPYe39kfRoMOqgCJHSdJNRL3AEE%2fB7h3xZFgO6JuRQdl6PE8zfRPzlF71CEojoXFc%2b9SPy3KxjCc%2bgCpIE0IilB65YWHFQBfAbUZEIDQGPA7xngDsNgMpFUY0Q0ESHHhKbkM3A9yoFqDceGQpTijWjhXxCAtC6gCWk0BAwLzqkAQsC6TJVDGcKrZdeDXp%2fcvki8zeuTH8uO6ehYzRJumxEUBUkyMa%2baUDWBVDLNnJJgNE9ixGZiOlckaQAVAWmBdTqBzqxQJ%2fD2KgrTCDzq9clywO%2fxZMi1AgcBBbhNPhyJ47TlsGuzjaHRSRRdoKq8AF3XOdvZw1BMQneUMl9iZN4eo3AmRWVwFulvqusAngBOryLBY0AcaPD65LeAFPAk0BLwe57OTnz3i4sc3ruFx2s24MwzoWgaW4tNnPn0JLt37KJ2zwGMgk5X3zd8ONJOX7mGvcK5OEGvT94HNGcs2rjSzrhUUwn4PV1AV4bcm5nwkYDfc3xhbs%2bVWQZH%2btlekc%2fDtRupKrub1uYT7NhWw9bde%2fl2REUSRR56pJT0lxofhM8xaheXVPA1oDPg9zT%2bExmsDqF8hUqmlvi%2bDUs6RWhW5Ov%2bKaxmK5XFLkIzIe7f%2fiBtwypPVZqIRWJ8Ny6x09OEJJs5rrTxn4yJY00NwRU0mtaMLY9kyL3n9clVAb%2fnley8wnyBkkIHrgILm925JGPTJONRDHqaHDQmx2a4Ph4hpFkpcZqZmBhHtbI4wYDfs3%2bNhn5bpqG03LKl1ydXAS97fXJ%2b9jv0Hq6lyK5C%2fBJ6PEjyj2nW2VQGLw5gLKqn92YSxWgjbrRy89ogVosFoyT%2bZUzcurjD65M71oDjCeDzbCIZ5VqAk9mJm9w5zAdPkRx%2bB3H6Y3Kj7TxQMkzLqfe5V71GvttFiduOa3aQc58E6JseJJXSEVhjeH2yvpxN5qVnygj%2fdJQCWxjBAOm0gVRC5MLPdoZnt2F3rsdisTAV7MBlusT3oVK6TOriCv4fIZnsSDlu1IQRNWVGV83kYKFuZzX7PQ1MFOg0j53nh%2bg8qpLg2eogeyJ53JFddDkLtyiZ6%2b%2b674Vu5cZXiIkJdAEMjnvIqzjEjVCS7rmrhOwC0Vwn58fqkIIXeL72Mn8CJn6UfKGeNt4AAAAASUVORK5CYII%3d);
}
</style>
<!------ Above sample from web logs -------------->


Encoded Red Dot Example

[red dot image rendered inline here from a data URI]
Red Dot Above?


HTML Code for the Red Dot Above

<p style="text-align: center;"><img src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAABGdBTUEAALGP C/xhBQAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9YGARc5KB0XV+IA AAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAF1J REFUGNO9zL0NglAAxPEfdLTs4BZM4DIO4C7OwQg2JoQ9LE1exdlYvBBeZ7jq ch9//q1uH4TLzw4d6+ErXMMcXuHWxId3KOETnnXXV6MJpcq2MLaI97CER3N0 vr4MkhoXe0rZigAAAABJRU5ErkJggg==" alt="Red dot" />

(Above code from Wikipedia link – check there for additional examples.)

Topics: System and Network Security, Web Problem Solving, Web Technologies | Comments Off

GeoIP origins of malicious network activity

By Dale Reagan | April 28, 2013

I have previously written that GeoIP data is not a reliable source for definitive data analysis – it is, however, a reasonable indicator.  The numbers below are from a single server (logged during the past few years) and don’t really provide any surprises.  Some things to keep in mind:

In early 2013 there have been numerous ‘news stories’ about hacking from China.  The numbers below are cumulative (based on several years of data.)  One of the interesting pieces of data (if you dig a bit) is that many US IP locations (GeoIP) are for ‘data centers’ (ISPs with large numbers of servers and significant IT infrastructure) that appear to be ‘hosting’ connections/domains/servers for China-based entities (as well as entities from other countries) from which hacking attacks appear to be launched.  The numbers would be higher if I did not use firewall rules (along with mod_security, mod_geoip, milter-greylist) to block access from troublesome IP space.
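
For reference, counts like the ones below can be produced with standard tools; here is a minimal sketch (my assumption, not necessarily the exact pipeline used for these numbers) using the geoiplookup utility against an sshd log such as /var/log/secure:

## count sshd connection attempts by GeoIP country code
## (adjust the log path and grep pattern for your syslog setup)
grep 'sshd.*Failed password' /var/log/secure | \
    grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -u | \
    while read IP ; do geoiplookup ${IP} ; done | \
    awk -F': ' '{print $2}' | cut -d, -f1 | sort | uniq -c | sort -rn | head -10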

GeoIP Sources of Ssh Connection Attempts – Top 10 Countries

        *********** Unique # of Countries_CNT:_124 ***********
  1.    1120 | CN
  2.     868 | US
  3.     225 | KR
  4.     181 | DE
  5.     181 | CA
  6.     151 | BR
  7.     136 | IN
  8.     108 | FR
  9.     106 | IT
 10.      99 | GB

GeoIP Sources of Ssh Connection Attempts – Top 10 Cities

        *********** Unique # of Cities_CNT:_1177 ***********
  1.     225 | Beijing
  2.     135 | Seoul
  3.     114 | Guangzhou
  4.      62 | Shanghai
  5.      54 | Taipei
  6.      49 | Hangzhou
  7.      48 | Nanjing
  8.      48 | Dallas
  9.      41 | Paris
 10.      37 | San_Antonio

Fake ‘Bots’ & GeoIP data

A related issue that I started tracking is ‘fake bots’ – web server connections that claim to be from ‘legitimate bots’ but, when you review the IP data (GeoIP or DNS information), turn out NOT to be related to the ‘bot domain’ (i.e. GoogleBot) – see the verification sketch after the table below.  The numbers below are from ‘fake’ Google Bots – my data starts in 2010:

 Rank | # Fake | GeoIP Country
------|--------|---------------
    1 |   4145 | BR
    2 |    399 | TR
    3 |    369 | PT
    4 |     95 | ES
    5 |     85 | IT
    6 |     84 | FR
    7 |     68 | UA
    8 |     60 | MX
    9 |     50 | US
   10 |     38 | RU
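
How can you tell a fake from the real thing?  Google's recommended check is a reverse DNS lookup on the visiting IP followed by a forward lookup on the returned name – a genuine Googlebot resolves to a googlebot.com (or google.com) host that maps back to the same IP.  A minimal sketch using the standard host command (the IP below is only an illustration – substitute addresses from your own access logs):

## verify a visitor claiming to be Googlebot via reverse + forward DNS
IP_TO_CHECK="66.249.66.1"   ## illustrative IP - substitute one from your logs
BOT_HOST=$(host ${IP_TO_CHECK} | awk '{print $NF}')
echo "Reverse DNS: ${BOT_HOST}"
host ${BOT_HOST}            ## the forward lookup should return the SAME IP
## if the name is not under googlebot.com/google.com, or the forward lookup
## does not match the original IP, treat the visitor as a fake bot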

Fake Google Bots per year:

So, what are these ‘fake bots’ doing?

I’m guessing that ‘fake bots’ are visiting your sites for two primary reasons:

  1. scraping your site (the content is then re-published on bogus, totally automated web sites used to generate web traffic and earn revenue); these fake sites can ‘pollute’ the major search engines, and if your web site is supported by search engine ads then this, of course, reduces your revenue…
  2. attempting to create SEO traffic from web sites that ‘publish’ their web server logs.

Overall, the scope and sources of malicious, nefarious or ‘bad’ server traffic show no GeoIP limitations – but the data above does suggest ‘hot spots’ for the ‘bad guys’…

So what?  I suggest reviewing the GeoIP data for all of your server logs – at this point, the results should NOT be surprising.  Once you identify/understand the ‘data patterns’ you can create your own automated solution(s) to deal with these types of network/server issues.  Your arsenal is unlimited, but a starting point is to use/configure tools like:

And yes, I can (and do) manually ‘block’ bad IP space – but, most of what I do has been automated using standard *NIX tools – I just review the logs/system to make sure it continues to ‘work’.

As always, your mileage will vary… :)

Topics: Computer Technology, System and Network Security, Unix-Linux-Os | Comments Off

