The SOCKET Implementation in lwIP

Content published/updated: 2024/5/3 17:20:34


http://bluefish.blog.51cto.com/214870/158413

The purpose of implementing the lwIP protocol stack is, ultimately, to let the upper layer do socket programming for applications. So let's start from the socket. For compatibility, lwIP's socket layer provides the standard socket interface functions; indeed, in src\include\lwip\socket.h we can see the following macro definitions:

    #if LWIP_COMPAT_SOCKETS

    #define accept(a,b,c)           lwip_accept(a,b,c)
    #define bind(a,b,c)             lwip_bind(a,b,c)
    #define shutdown(a,b)           lwip_shutdown(a,b)
    #define closesocket(s)          lwip_close(s)
    #define connect(a,b,c)          lwip_connect(a,b,c)
    #define getsockname(a,b,c)      lwip_getsockname(a,b,c)
    #define getpeername(a,b,c)      lwip_getpeername(a,b,c)
    #define setsockopt(a,b,c,d,e)   lwip_setsockopt(a,b,c,d,e)
    #define getsockopt(a,b,c,d,e)   lwip_getsockopt(a,b,c,d,e)
    #define listen(a,b)             lwip_listen(a,b)
    #define recv(a,b,c,d)           lwip_recv(a,b,c,d)
    #define recvfrom(a,b,c,d,e,f)   lwip_recvfrom(a,b,c,d,e,f)
    #define send(a,b,c,d)           lwip_send(a,b,c,d)
    #define sendto(a,b,c,d,e,f)     lwip_sendto(a,b,c,d,e,f)
    #define socket(a,b,c)           lwip_socket(a,b,c)
    #define select(a,b,c,d,e)       lwip_select(a,b,c,d,e)
    #define ioctlsocket(a,b,c)      lwip_ioctl(a,b,c)

    #if LWIP_POSIX_SOCKETS_IO_NAMES
    #define read(a,b,c)             lwip_read(a,b,c)
    #define write(a,b,c)            lwip_write(a,b,c)
    #define close(s)                lwip_close(s)
    #endif /* LWIP_POSIX_SOCKETS_IO_NAMES */

    #endif /* LWIP_COMPAT_SOCKETS */

Before we look at the actual implementation functions, note that these macros alone already cover the interface a standard socket API must provide.

Now let's look at the actual function implementations, which live in src\api\socket.c. First, the function that accepts a connection (this one is TCP-specific). Its prototype is:

    int lwip_accept(int s, struct sockaddr *addr, socklen_t *addrlen)

Note that the socket parameter s is actually an int. The first function call inside lwip_accept is:

    sock = get_socket(s);

The sock variable here has type struct lwip_socket, defined as follows:

    /** Contains all internal pointers and states used for a socket */
    struct lwip_socket {
      /** sockets currently are built on netconns, each socket has one netconn */
      struct netconn *conn;
      /** data that was left from the previous read */
      struct netbuf *lastdata;
      /** offset in the data that was left from the previous read */
      u16_t lastoffset;
      /** number of times data was received, set by event_callback(),
          tested by the receive and select functions */
      u16_t rcvevent;
      /** number of times data was sent (send buffer free), set by event_callback(),
          tested by select */
      u16_t sendevent;
      /** socket flags (currently, only used for O_NONBLOCK) */
      u16_t flags;
      /** last error that occurred on this socket */
      int err;
    };

Set this structure aside for now and look at the implementation of get_socket [also in src\api\socket.c]. There we find the statement sock = &sockets[s]; clearly the return value is this sock: the function uses the descriptor passed in as an index into the sockets array and returns the address of the corresponding element. So where do the elements of this sockets array get assigned?

At this point it seems we should start from the beginning of standard socket programming, namely the socket function, so let's take a look at it. Its actual implementation is the following function:

    int lwip_socket(int domain, int type, int protocol)   [src\api\socket.c]

Based on the protocol type, i.e. the type parameter, this function creates a pointer to a netconn structure, then calls alloc_socket with that pointer as the argument. Let's look at that function in detail:

    static int alloc_socket(struct netconn *newconn)
    {
      int i;

      /* Protect socket array */
      sys_sem_wait(socksem);

      /* allocate a new socket identifier */
      for (i = 0; i < NUM_SOCKETS; ++i) {
        if (!sockets[i].conn) {
          sockets[i].conn = newconn;
          sockets[i].lastdata = NULL;
          sockets[i].lastoffset = 0;
          sockets[i].rcvevent = 0;
          sockets[i].sendevent = 1; /* TCP send buf is empty */
          sockets[i].flags = 0;
          sockets[i].err = 0;
          sys_sem_signal(socksem);
          return i;
        }
      }
      sys_sem_signal(socksem);
      return -1;
    }

Right, this is where the elements of the global sockets array get assigned.

While we're here, let's also look at the netconn structure. Its formal name is the netconn descriptor:

    /** A netconn descriptor */
    struct netconn {
      /** type of the netconn (TCP, UDP or RAW) */
      enum netconn_type type;
      /** current state of the netconn */
      enum netconn_state state;
      /** the lwIP internal protocol control block */
      union {
        struct ip_pcb  *ip;
        struct tcp_pcb *tcp;
        struct udp_pcb *udp;
        struct raw_pcb *raw;
      } pcb;
      /** the last error this netconn had */
      err_t err;
      /** sem that is used to synchronously execute functions in the core context */
      sys_sem_t op_completed;
      /** mbox where received packets are stored until they are fetched
          by the netconn application thread (can grow quite big) */
      sys_mbox_t recvmbox;
      /** mbox where new connections are stored until processed
          by the application thread */
      sys_mbox_t acceptmbox;
      /** only used for socket layer */
      int socket;
    #if LWIP_SO_RCVTIMEO
      /** timeout to wait for new data to be received
          (or connections to arrive for listening netconns) */
      int recv_timeout;
    #endif /* LWIP_SO_RCVTIMEO */
    #if LWIP_SO_RCVBUF
      /** maximum amount of bytes queued in recvmbox */
      int recv_bufsize;
    #endif /* LWIP_SO_RCVBUF */
      u16_t recv_avail;
      /** TCP: when data passed to netconn_write doesn't fit into the send buffer,
          this temporarily stores the message. */
      struct api_msg_msg *write_msg;
      /** TCP: when data passed to netconn_write doesn't fit into the send buffer,
          this temporarily stores how much is already sent. */
      int write_offset;
    #if LWIP_TCPIP_CORE_LOCKING
      /** TCP: when data passed to netconn_write doesn't fit into the send buffer,
          this temporarily stores whether to wake up the original application task
          if data couldn't be sent in the first try. */
      u8_t write_delayed;
    #endif /* LWIP_TCPIP_CORE_LOCKING */
      /** A callback function that is informed about events for this netconn */
      netconn_callback callback;
    };   [src\include\lwip\api.h]

That gives us a rough picture of what this structure contains.

Next, taking the SOCK_DGRAM type as an example, let's look at how a netconn is created. In lwip_socket we have:

    case SOCK_DGRAM:
      conn = netconn_new_with_callback(
          (protocol == IPPROTO_UDPLITE) ? NETCONN_UDPLITE : NETCONN_UDP,
          event_callback);

where

    #define netconn_new_with_callback(t, c) netconn_new_with_proto_and_callback(t, 0, c)

A simplified implementation looks like this:

    struct netconn *
    netconn_new_with_proto_and_callback(enum netconn_type t, u8_t proto,
                                        netconn_callback callback)
    {
      struct netconn *conn;
      struct api_msg msg;

      conn = netconn_alloc(t, callback);
      if (conn != NULL) {
        msg.function = do_newconn;
        msg.msg.msg.n.proto = proto;