Archive for the ‘Zend and PHP’ Category

Pinglist – Uptime And Performance Monitoring Done Right

I would like to introduce a project I have been working on in my free time for the better part of the last year. Pinglist is an uptime and performance monitoring platform currently consisting of a web app and an iOS mobile app, though I am planning to release an Android version this year.

It allows you to set alarms to watch HTTP endpoints. You can view the status of the endpoints or websites you are monitoring and receive push notification alerts whenever the platform detects downtime or another issue. It is very useful for keeping an eye on your servers wherever you are.

Pinglist app screenshot

Features

  • Receive push notification alerts to your iPhone
  • View the status of all monitored endpoints or websites
  • View uptime and performance metrics of all endpoints or websites
  • View current and past incidents (outages, slow responses etc)
  • Create teams and manage team members
  • Update your profile

You can see it on iTunes here.

Feel free to try out the app. It should be quite useful for DevOps engineers and sysadmins, as well as software engineers working on web services in general.

There is a subscription-based model which allows you to define more alarms and unlocks the team management feature. If you buy a startup or business subscription, you can define teams of people who share your subscription limits.

The backend platform is written in Golang and deployed to AWS in a completely automated way using Ansible and Terraform. I am especially proud of the clean and robust API design I perfected over several months, which should allow it to scale very nicely.

 

Source: http://blog.richardknop.com/2016/05/pinglist-uptime-and-performance-monitoring-done-right/


Codeship Go 1.5

Codeship virtual machines used for builds currently have Go 1.4 installed. It’s quite easy to upgrade to 1.5, though. Just use the following setup commands:

export GO_ARCHIVE=go1.5.2.linux-amd64.tar.gz
# download the official binary release and extract it into $HOME/go
wget https://storage.googleapis.com/golang/$GO_ARCHIVE
tar -C $HOME -xzf $GO_ARCHIVE
# point the build at the new toolchain
export PATH=$HOME/go/bin:$PATH
export GOROOT=$HOME/go
go version

Codeship Go 1.5

Source: http://blog.richardknop.com/2015/12/codeship-go-1-5/


Machinery: Celery for Golang

I haven’t written anything here for a long time. Just a quick update.

I have been working on an interesting open source project for the last couple of weeks. It is an asynchronous task queue/job queue based on distributed message passing, written in Golang. I called it Machinery.

While writing the initial MVP of Machinery, I was mainly inspired by Celery, which is one of my favourite Python libraries.

The MVP supports the RabbitMQ message broker and two result backends: AMQP and Memcache. You can send asynchronous tasks, set success and error callbacks, and also create chains of tasks to be executed one by one.

The GitHub repository has a comprehensive README file which covers installation, configuration and usage. There are also useful examples you can take a look at.

I hope to continue working on this project and eventually make it production ready. Contributions are very welcome.

Source: http://blog.richardknop.com/2015/05/machinery-celery-for-golang/


Celery workflows

Celery is a great asynchronous task/job queue framework. It allows you to create distributed systems where tasks (execution units) are executed concurrently on multiple workers using multiprocessing. It also supports scheduling and scales really well, since you can scale workers horizontally.

Celery is great at firing both synchronous and asynchronous tasks. Asynchronous tasks, one of its main strengths, cover work such as email sending, credit card processing or writing transactions to a general ledger.

However, Celery offers much more. One of its most useful features is the ability to chain multiple tasks to create workflows.

Task Callbacks

Let’s create a few simple tasks for demonstration purposes:

from celery import shared_task

@shared_task
def add(x, y):
    return x + y

@shared_task
def multiply(x, y):
    return x * y

@shared_task
def tsum(numbers):
    return sum(numbers)

A very simple example of linking two tasks would be:

add.apply_async((5, 5), link=add.s(35))

Which would result in:

(5 + 5) + 35

You can also define an error callback. Let’s create a simple error handling task:

from celery.result import AsyncResult

@shared_task
def error_handler(uuid):
    # fetch the failed task's result and log the exception with its traceback
    result = AsyncResult(uuid)
    exc = result.get(propagate=False)
    print('Task {0} raised exception: {1!r}\n{2!r}'.format(
          uuid, exc, result.traceback))

You could then write:

add.apply_async((5, 5), link_error=error_handler.s())

This is useful for sending an email notification about a system error, or for logging exceptions for later debugging.

Both callbacks and error callbacks can be expressed as a list:

add.apply_async((5, 5), link=[add.s(35), multiply.s(2)])

The result from the first task would then be passed to both callbacks, so you would get:

(5 + 5) + 35

and

(5 + 5) * 2

If you don’t want to pass the result from the first task to its callback, you can create an immutable callback. This can be useful when you have a piece of logic you want to execute after the task but do not need its return value.

add.apply_async((2, 2), link=multiply.si(4, 4))

Next, let’s look at some more complex workflow primitives Celery offers.

The Primitives

The first primitive I will show you is group. Groups are used when you want to execute any number of tasks in parallel.

from celery import group
result = group(add.s(i, i) for i in xrange(10))()
result.get(timeout=1)

Would result in a list of results:

[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

The next primitive is a chain. A chain defines a set of tasks to be executed one after another in a synchronous manner.

result = (multiply.s(5, 5) | add.s(4) | multiply.s(8))()
result.get()

This would give you the equivalent of:

((5 * 5) + 4) * 8 = 29 * 8 = 232
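For clarity, here is the same chain composed out of plain Python functions mirroring the add and multiply tasks defined earlier (no Celery involved):

```python
def multiply(x, y):
    return x * y

def add(x, y):
    return x + y

# multiply(5, 5) feeds add(..., 4), which feeds multiply(..., 8)
result = multiply(add(multiply(5, 5), 4), 8)
print(result)  # 232
```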

Another very useful primitive is a chord. A chord lets you define a header and a body. The header is a list of tasks to be executed in parallel; the body is a callback to be executed after all tasks in the header have run. The callback in the body will receive a list of arguments representing the return values of all tasks in the header.

from celery import chord

result = chord((add.s(i, i) for i in xrange(10)), tsum.s())()
result.get()

This would result in [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] being passed to the tsum task, which would add all the numbers together, giving 90 as the result. Basically:

sum([0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
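In plain Python, the chord amounts to gathering the header results first and then feeding the whole list to the body:

```python
# what the header group produces
header_results = [i + i for i in range(10)]
print(header_results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# what the tsum body then computes
print(sum(header_results))  # 90
```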

There are a couple more primitives. Map and starmap work similarly to the built-in Python map function.

tsum.map([range(10), range(100)])

Will result in:

[45, 4950]

Starmap allows you to send arguments as *args:

add.starmap(zip(range(10), range(10)))

Will result in:

[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

Chunks let you split a long list of arguments into subsets, resulting in a task being called multiple times, each time with a smaller chunk of the arguments.

# list of 1000 tuple pairs [(0, 0), (1, 1), ..., (999, 999)]
items = zip(xrange(1000), xrange(1000))
# call the resulting signature to actually send the tasks
add.chunks(items, 10)()
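A plain-Python sketch of the splitting itself (assuming, per Celery's canvas semantics, that the second argument is the size of each chunk, so 1000 pairs become 100 task invocations of 10 pairs each):

```python
def split_into_chunks(items, size):
    # slice the list into consecutive sub-lists of at most `size` items
    return [items[i:i + size] for i in range(0, len(items), size)]

items = list(zip(range(1000), range(1000)))
chunks = split_into_chunks(items, 10)
print(len(chunks))    # 100
print(chunks[0][:3])  # [(0, 0), (1, 1), (2, 2)]
```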

This was a very short introduction to workflows in Celery. There is much more flexibility in defining workflows; I haven’t properly covered error handling or the many different options you have.

There are some limitations as well, though. For instance, chaining chords together is not possible as far as I know.

Finally, let me say that Celery is an essential part of every Python programmer’s repertoire. If you haven’t used it yet, you should definitely take a look.

It can be used for simple use cases, such as asynchronously charging credit cards or sending emails in the background, as well as for more sophisticated things like workflows, or even as middleware in service oriented architectures.

Source: http://blog.richardknop.com/2014/07/celery-workflows/


Using Celery as middleware in SOA

When creating a service oriented architecture, one of the most important decisions to make is usually what protocol to use for inter-service communication. Let’s say an architecture consists of two layers:

  • edge
  • application

The edge is just a thin, publicly accessible HTTP layer exposing RESTful endpoints. This is usually just a server with an Nginx reverse proxy (or multiple servers behind a load balancer) and something to route URL requests and make calls to the services that handle all business logic.

The application layer contains all business logic. It usually consists of several services (login service, wallet service, payment service etc) deployed in a VPC (virtual private cloud).

The question is, how should the edge communicate with different services? And how should the services communicate between each other?

A simple and very common solution is for the services to expose their own RESTful APIs so the edge server can trigger service calls via standard HTTP.

One of the drawbacks of this solution is that the edge server and all application servers now need to know additional configuration (DNS, IP addresses etc.). Also, the services need to be deployed behind their own load balancers in order to scale, and they need to run their own web servers (Nginx plus an application server such as uWSGI or Green Unicorn).

I prefer to use the AMQP protocol instead, which solves the above-mentioned problems. For example, I am using RabbitMQ and Celery as middleware in my latest project. I have two RabbitMQ clusters:

  • one for synchronous blocking tasks
  • one for asynchronous non-blocking tasks

All my services run as Celery daemons on application servers. There is no need for complex configuration (only the URL of the message broker is needed) and no need to run a web server. Also, load balancers are no longer needed, as the RabbitMQ cluster uses round robin to distribute messages among available workers.

The edge server only publishes messages to the correct queue. To make this work as expected, I wrote a clever Celery routing configuration which routes tasks to the correct queues.

I am using a separate exchange for each service. Each service runs as two Celery daemons (sync and async) to group synchronous and asynchronous tasks together. Here is how it’s done:

from celery import Celery
import re

# http://docs.celeryproject.org/en/latest/userguide/routing.html
class Router(object):
    def route_for_task(self, task, args=None, kwargs=None):
        parts = task.split('.')
        if re.match(r'^mp[a-z_]+\.sync\.[a-z_]+$', task) is not None:
            return {
                'routing_key': task,
                'queue': parts[0] + '.sync',
            }
        elif re.match(r'^mp[a-z_]+\.async\.[a-z_]+$', task) is not None:
            return {
                'routing_key': task,
                'queue': parts[0] + '.async',
            }
        return None

def _get_celery_queues():
    services = [
        'mplogin',
        'mpwallet',
        'mpledger',
    ]

    queues = {}
    for service in services:
        queues[service + '.sync'] = {
            'binding_key': service + '.sync.#',
            'exchange': service,
            'exchange_type': 'topic',
            'delivery_mode': 1, # transient messages, not written to disk
        }
        queues[service + '.async'] = {
            'binding_key': service + '.async.#',
            'exchange': service,
            'exchange_type': 'topic'
        }

    return queues

class CeleryConfig(object):
    CELERY_ROUTES = (Router(),)

    #: Only add pickle to this list if your broker is secured
    #: from unwanted access (see userguide/security.html)
    CELERY_ACCEPT_CONTENT = ['pickle', 'json']
    CELERY_TASK_SERIALIZER = 'pickle'
    CELERY_RESULT_SERIALIZER = 'pickle'
    CELERY_TIMEZONE = 'UTC'
    CELERY_ENABLE_UTC = True
    CELERY_BACKEND = 'amqp'

    # Replicate queues to all nodes in the cluster
    CELERY_QUEUE_HA_POLICY = 'all'

    # http://docs.celeryproject.org/en/latest/userguide/tasks.html#disable-rate-limits-if-they-re-not-used
    CELERY_DISABLE_RATE_LIMITS = True

    CELERY_QUEUES = _get_celery_queues()
    BROKER_HEARTBEAT = 10
    BROKER_HEARTBEAT_CHECKRATE = 2.0
    BROKER_POOL_LIMIT = 0

def celery_apps_factory(app_type, sync_broker_url, async_broker_url, service_name):
    protocol = 'pyamqp' if app_type == 'SUBSCRIBER' else 'librabbitmq'

    broker_url_sync = protocol + '://' + sync_broker_url
    broker_url_async = protocol + '://' + async_broker_url

    sync_app = Celery(service_name + '.sync_app', broker=broker_url_sync)
    sync_app.config_from_object(CeleryConfig)

    async_app = Celery(service_name + '.async_app', broker=broker_url_async)
    async_app.config_from_object(CeleryConfig)
    async_app.conf.CELERY_IGNORE_RESULT = True

    return sync_app, async_app

In the routing configuration above, the SOA platform would have a common prefix mp (mp = my platform… just an example). Every service would therefore be prefixed with mp (e.g. mplogin would be the name of the login service).

A task name would consist of the service name (the same as the exchange name), the word “sync” or “async”, and a task name, all separated by full stops. For example:

  • mplogin.sync.register

This is a task to register a new user. If you split the name by full stops, you can say that:

  • mplogin is the name of the exchange
  • mplogin.sync is the name of the queue
  • register is the name of the task (service method)
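To see what the Router above produces for a concrete task name, here is a minimal standalone reproduction of its route_for_task logic:

```python
import re

# same matching logic as Router.route_for_task above
def route_for_task(task):
    parts = task.split('.')
    if re.match(r'^mp[a-z_]+\.sync\.[a-z_]+$', task):
        return {'routing_key': task, 'queue': parts[0] + '.sync'}
    if re.match(r'^mp[a-z_]+\.async\.[a-z_]+$', task):
        return {'routing_key': task, 'queue': parts[0] + '.async'}
    return None

print(route_for_task('mplogin.sync.register'))
# {'routing_key': 'mplogin.sync.register', 'queue': 'mplogin.sync'}
print(route_for_task('not.a.platform.task'))  # None
```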

Here is an example definition of the register task:

sync_app, async_app = celery_apps_factory(
    app_type='SUBSCRIBER',
    sync_broker_url=settings.BROKER_URL_SYNC,
    async_broker_url=settings.BROKER_URL_ASYNC,
    service_name='mplogin',
)

@sync_app.task(name='mplogin.sync.register')
def register(user_obj):
    # return the response so the calling edge server can .get() it
    return user_service.register(user_obj)

To register a new user, you could then call the task from the edge server:

login_sync_app, login_async_app = celery_apps_factory(
    app_type='PUBLISHER',
    sync_broker_url=settings.BROKER_URL_SYNC,
    async_broker_url=settings.BROKER_URL_ASYNC,
    service_name='mplogin',
)

login_sync_app.send_task(
    'mplogin.sync.register',
    kwargs={
        'user_obj': json_obj,
    },
).get()

I hope at least somebody will find this useful :)

Source: http://blog.richardknop.com/2014/05/using-celery-as-middleware-in-soa/


Create a readonly user in Postgres

This is quite useful for creating a user that can be used for backups, reporting and so on. Assuming you have databases foo_db and bar_db and want to create a read-only user for them called backup_user with password qux94874:

CREATE USER backup_user WITH ENCRYPTED PASSWORD 'qux94874';
GRANT CONNECT ON DATABASE foo_db TO backup_user;
GRANT CONNECT ON DATABASE bar_db TO backup_user;
\c foo_db
GRANT USAGE ON SCHEMA public TO backup_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_user;
\c bar_db
GRANT USAGE ON SCHEMA public TO backup_user;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_user;
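One caveat worth noting: GRANT ... ON ALL TABLES only covers tables that exist at the time it runs. If new tables will be created later, ALTER DEFAULT PRIVILEGES (available since PostgreSQL 9.0) can extend SELECT to those as well. A sketch, with the caveat that it only applies to objects subsequently created by the role issuing it (use FOR ROLE for a different table owner):

```sql
-- run in each database; affects tables created later by the current role
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO backup_user;
```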

Source: http://blog.richardknop.com/2014/04/create-a-readonly-user-in-postgres/


Round half to even in Go

Rounding half to even is used a lot when dealing with financial transactions. Here is my implementation in Go:

import "math"

// roundHalfToEven rounds f to the nearest integer value, breaking
// ties towards the nearest even integer (banker's rounding).
// http://en.wikipedia.org/wiki/Rounding#Round_half_to_even
func roundHalfToEven(f float64) float64 {
	integer, fraction := math.Modf(f)
	f = integer
	if math.Abs(fraction) == 0.5 {
		// exactly halfway: only move if the integer part is odd
		if math.Mod(f, 2) != 0 {
			f += math.Copysign(1, integer)
		}
	} else {
		// otherwise round to the nearest integer as usual
		f += math.Copysign(float64(int(fraction+math.Copysign(0.5, fraction))), fraction)
	}
	return f
}

And here are unit tests:

import "testing"

func TestRoundHalfToEven(t *testing.T) {

        testCases := []struct {
                numberToRound  float64
                expectedResult float64
        }{
                {
                        // 23.5 =~ 24
                        numberToRound:  23.5,
                        expectedResult: 24,
                },
                {
                        // 24.5 =~ 24
                        numberToRound:  24.5,
                        expectedResult: 24,
                },
                {
                        // 7.58 =~ 8
                        numberToRound:  7.58,
                        expectedResult: 8,
                },
                {
                        // 7.46 =~ 7
                        numberToRound:  7.46,
                        expectedResult: 7,
                },
                {
                        // 5 =~ 5
                        numberToRound:  5,
                        expectedResult: 5,
                },
                {
                        // -23.5 =~ -24
                        numberToRound:  -23.5,
                        expectedResult: -24,
                },
                {
                        // -24.5 =~ -24
                        numberToRound:  -24.5,
                        expectedResult: -24,
                },
                {
                        // -7.58 =~ -8
                        numberToRound:  -7.58,
                        expectedResult: -8,
                },
                {
                        // -7.46 =~ -7
                        numberToRound:  -7.46,
                        expectedResult: -7,
                },
                {
                        // -5 =~ -5
                        numberToRound:  -5,
                        expectedResult: -5,
                },
                {
                        // 0 =~ 0
                        numberToRound:  0,
                        expectedResult: 0,
                },
        }

        for i, tc := range testCases {
                actualResult := roundHalfToEven(tc.numberToRound)
                if actualResult != tc.expectedResult {
                        t.Errorf("%v rounded half to even should be %v, instead of %v (%v)", tc.numberToRound, tc.expectedResult, actualResult, i)
                }
        }

}

Source: http://blog.richardknop.com/2013/11/round-half-to-even-in-go/


Permutation algorithm

You might remember me writing several articles explaining different sorting algorithms. I might come back to that series, as there are a few sorting algorithms I haven’t gone through.

But I wanted to do something else now: an interesting algorithm to find all permutations of a string.

Let’s say you have string “bar”. The steps are as follows:

  1. If the string is just a single letter, return it.
  2. Remove the first letter of the string and find all permutations of the new string. Do this recursively.
  3. For each found permutation, insert the removed letter from the previous step at every single position. Add each of these strings to the results.
  4. Return the array of results.

Here is an implementation in JavaScript:

function permutations(word) {
    if (word.length <= 1) {
        return [word];
    }

    var perms = permutations(word.slice(1, word.length)),
        char = word[0],
        result = [],
        i;
    perms.forEach(function (perm) {
        for (i = 0; i < perm.length + 1; i += 1) {
            result.push(perm.slice(0, i) + char + perm.slice(i, perm.length));
        }
    });

    return result;
}
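A quick sanity check (the function definition is repeated here so the snippet runs standalone): for "bar" we expect 3! = 6 permutations.

```javascript
function permutations(word) {
    if (word.length <= 1) {
        return [word];
    }
    var perms = permutations(word.slice(1)),
        char = word[0],
        result = [],
        i;
    perms.forEach(function (perm) {
        for (i = 0; i <= perm.length; i += 1) {
            result.push(perm.slice(0, i) + char + perm.slice(i));
        }
    });
    return result;
}

console.log(permutations("bar"));
// [ 'bar', 'abr', 'arb', 'bra', 'rba', 'rab' ]
```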

Source: http://blog.richardknop.com/2013/10/permutation-algorithm/


My new iPad game

My new iPad game (School of Alchemy) just got approved by Apple. You can download it from the App Store.

It was quite a fun project. I learned a lot about iOS web views and JavaScript / HTML5. The game is funny and features beautiful retina display optimised images; I’m sure you will like it.

Feel free to retweet:

My new cool iPad game, please download :) https://t.co/Yp54YaiLLZ

— Richard Knop (@richardknop) September 14, 2013

Here are a few screenshots:

School of Alchemy #1

School of Alchemy #2

School of Alchemy #3

School of Alchemy #4

School of Alchemy #5

Source: http://blog.richardknop.com/2013/09/my-new-ipad-game/

<!–
var d = new Date();
r = escape(d.getTime()*Math.random());
document.writeln('’);
//–>
